pouët.net

AI crap in compo entries?

category: general [glöplog]
The problem people seem to be discussing in this thread is not what is shown in a demo, but which tools the individual artists/creators use to create that content. The reason it is not about what is shown is the question of the difference between getting inspiration from screenshots of a demo versus getting inspiration from drawings by a concept artist. In both cases the content is made up of RGB(A) pixels on a computer screen, which each individual brain processes. If we brought AI into the same picture, we could make that same statement about the brain.

People do like to use different things: using tools, coding everything on their own, or doing both. In the early years of the scene we had far fewer tools. One had to be creative exploiting those tools and using other techniques on different kinds of things. Almost everyone had to write their code in assembly language, Turbo Pascal or C. Every graphic artist had to pixel using the few tools they had, Brilliance, Deluxe Paint etc., and every musician had to make songs in Scream Tracker, Fast Tracker or a tracker that the coder of the group had made. It was probably as hard back then with the few tools they had as it is right now with the massive amount of tools and techniques available to everyone today. Was it a problem when 3D graphics cards became available, like the 3dfx Voodoo and Voodoo 2? Demosceners wanted to utilize them and be creative around the cards' capabilities.

In the past 10-20 years, faster CPUs, newer 3D graphics cards and more techniques at our disposal seem to have smeared things out quite a lot, and newcomers to the scene, when they watch a demo production, see a clearer distinction between oldschool and newschool. What some of them might not see is how much hard work goes into making a demo on, for example, an 8-bit platform versus a newschool demo made in, for example, Unity. The other problem is that high-end technology is dynamic and low-end technology is fixed. In that sense it is easier to start out on an oldschool platform, because one can focus on what is available and the hardware is not being improved (99% of the time), versus a newschool platform where new tools come out all the time, new Intel or AMD CPUs are pushed out constantly, and big companies release new versions of their graphics APIs. This is why I think more and more people who come from the past demoscene are likely to work on oldschool stuff, because they are more comfortable with it; some will continue to do both, and others will only follow the trend of newer and faster technology and utilize that. AI is like any other software that demosceners also want to exploit. One can also use AI to get inspired to do new tricks on those oldschool platforms.

As others have noted, AI cannot generate a full oldschool demo, not even a simple rasterbar effect, without the coder doing the actual debugging, bugfixing or changing of opcodes in the AI-generated code. It requires a lot of work, so it would often be faster for the coder to abandon the idea of letting the AI generate the code in the first place. Will AI be able to do it in 10, 20 or 30 years? Who knows.
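For readers unfamiliar with the effect being discussed: a rasterbar is conceptually trivial, the hard part is the cycle-exact register writes on real hardware. A minimal sketch of the logic only, in Python rather than 6502/68k assembly, with all names illustrative:

```python
# Sketch of rasterbar logic: one color per scanline, with a mirrored
# gradient forming the bar. On real 8-bit hardware this would be a
# cycle-timed loop writing the background-color register each scanline.

def rasterbar_colors(height, bar_top, palette):
    """Return one palette index per scanline.

    `palette` is the top half of the bar's gradient; it is mirrored so
    the bar fades in and out. Scanlines outside the bar get color 0.
    """
    gradient = palette + palette[-2::-1]  # e.g. [1,2,3] -> [1,2,3,2,1]
    colors = []
    for line in range(height):
        offset = line - bar_top
        colors.append(gradient[offset] if 0 <= offset < len(gradient) else 0)
    return colors

# A 12-line screen with a 3-color bar starting at scanline 4:
lines = rasterbar_colors(12, bar_top=4, palette=[1, 2, 3])
```

The point of the sketch is how little of it is "the demo": the timing, the racing of the beam, and the per-platform quirks are exactly the parts a code generator cannot hand you ready-made.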

Does every Shadertoy rendering look the same, similar to how fixed-function pipeline renderings looked the same? Yes and no. They probably don't look the same if we consider that shader technology has improved over the years. But are there similarities in every shader production? Yes, I think so, the same way fixed-function pipeline renderings looked kind of plastic back then. It's just difficult to find words for something one has seen many times. It's like a musician telling someone they can hear the difference between 44.1 kHz CD quality and 96 kHz hi-fi quality; some people just can't hear the difference. Can you see the similarities between a very good demo made with shaders and a very bad one?

We could go back to the early and late 90s and make our own triangle fillers, raytracers etc. instead of letting AI help us out in various areas. Not everyone wants to do that; not everyone wants to do the same thing. That makes having categories that please everyone in the scene more difficult.
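As a reminder of what "make our own triangle fillers" means in practice, the core of one is small. A rough sketch using the edge-function (half-space) formulation, in Python for readability where the 90s version would have been fixed-point assembly or C:

```python
# Sketch of a triangle filler: a pixel center is inside the triangle
# when all three edge functions share a sign (handles both windings).

def fill_triangle(w, h, v0, v1, v2):
    """Rasterize a triangle into a w*h grid of 0/1 coverage flags."""
    def edge(a, b, px, py):
        # Signed area of the parallelogram (a->b, a->p); sign tells
        # which side of edge a->b the point p lies on.
        return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])

    grid = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            px, py = x + 0.5, y + 0.5  # sample at pixel centers
            e0 = edge(v0, v1, px, py)
            e1 = edge(v1, v2, px, py)
            e2 = edge(v2, v0, px, py)
            if (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
               (e0 <= 0 and e1 <= 0 and e2 <= 0):
                grid[y][x] = 1
    return grid
```

A real 90s filler would of course trace the two edges incrementally per scanline instead of testing every pixel, but the geometry is the same.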

So, what are the important questions to ask, and how should the demoscene address them? How can the demoscene be comfortable with whatever implications the answers to those questions have?
added on the 2024-04-02 17:50:31 by rudi rudi
Quote:
this is often framed as democratizing a previously gatekept or hard to acquire resource

What was the gate being kept in this case, and by whom?
added on the 2024-04-02 18:00:34 by absence absence
Quote:
What was the gate being kept in this case, and by whom?


the gate of access to acquiring artistic skill, kept by the expense and time required to get an art education, if I understood the argument correctly.
added on the 2024-04-02 18:06:56 by wayfinder wayfinder
"Art and craft are different things" i have heard somewhere.
added on the 2024-04-02 18:17:54 by Krill Krill
Quote:
The problem people seem to be discussing in this thread, is not what is shown in a demo, but what tools the individual artist/creators use to create that content. [...]

interesting!
added on the 2024-04-02 18:19:49 by havoc havoc
Quote:
@Frost The point was that assuming everybody is able to detect heavy AI usage makes stating the obvious superfluous, and renders any accusations of trying to fool people, and the like, moot.

That assumption turned out to be invalid, which is an interesting thing to consider.


You don't have to apologize or put a disclaimer on every demo in which you use AI.

That initial assumption looks totally valid. Please, don't "correct" yourself just because some people are unable to tell if something was created using a lot of AI or not.

In any case, within a few years all people involved in graphic design or creation will regularly use some AI tool at some point in their creation process.

I have no doubt that Greippi is the artist of the Toxic Modulo adventures and the only way you could change my mind is to show me a Toxic Modulo comic published prior to these demos whose artist was someone else.

This means that Toxic Modulo is a genuine Amiga superhero, unlike Batman and the Powerpuff Girls.
added on the 2024-04-02 18:22:26 by ham ham
Quote:
In any case, within a few years all people involved in graphic design or creation will regularly use some AI tool at some point in their creation process.


fr? sounds like a bold statement tbh. don't believe the stupid tech bro babble man. unless you mean like the technical use of AI technology like smash's examples.
added on the 2024-04-02 18:26:55 by okkie okkie
Quote:
Typing words is not art …

Don't tell that to an author. ;o)
added on the 2024-04-02 18:30:17 by gaspode gaspode
deliberately misreading what I meant! boooh! :D though, not all written word is art idd, and I know authors that would agree, so there!
added on the 2024-04-02 18:33:03 by okkie okkie
'typing prompts is not art', there! fixed.
added on the 2024-04-02 18:33:59 by okkie okkie
Quote:
sounds like a bold statement tbh.

Don't think so, because Adobe will incorporate more and more AI stuff in their tools.
added on the 2024-04-02 18:35:08 by gaspode gaspode
oh yeah, that terrible AI fill tool.. it sucks that companies try to push that shit and mediocre artists will use it.
added on the 2024-04-02 18:37:26 by okkie okkie
@okkie: There will be all kinds of uses, some more technical and others more mundane, even legal and illegal ones. Of course, I am talking about legal and fair use in any case.

In the case of this demo I cannot see any moral objection. It's more original than using yet another Boris Vallejo rip, or yet another comic book character created by Bob Kane or some other guy.
added on the 2024-04-02 18:37:44 by ham ham
I don't think anyone was complaining about the morality? maybe in the way that AI art is basically a load of other people's stolen shit. For me it's that the result just sucks shit and it ruined the compo for a lot of people.
added on the 2024-04-02 18:40:36 by okkie okkie
@okkie: Well, maybe they should make the third issue of Toxic Modulo in a more dynamic way. Kind of like old Ozone demos like UFO or Smoke Bomb instead of just a slow paced comic. If Batman and the Powerpuff Girls can win demo compos, Toxic Modulo could too! :]
added on the 2024-04-02 18:45:42 by ham ham
idk man, I guess?
added on the 2024-04-02 18:49:18 by okkie okkie
@ham
Reading your hippie prophecies, I always come to the (perhaps wrong) conclusion that you're very happy about the direction things seem to be going.
added on the 2024-04-02 18:54:26 by 4gentE 4gentE
@4gentE: The future is always unknown. I'm happy to be lucky enough to live in interesting times. :]
added on the 2024-04-02 18:57:57 by ham ham
Here to earn a glop?
added on the 2024-04-02 19:14:24 by XSM XSM
if we use all comments so far as AI prompt, bifat can make an even longer and even more boring amiga demo with it
There's a lot of talk about art, which is not without relevance. What interests me at least as much, and makes up a large part of my personal demo experience, is craft. Generative AI tools are explicitly built to skip past the craft, which detracts hugely from my overall enjoyment of watching demos.

You wrote a prompt? Nice. I'm sure it's the inevitable future. But this isn't an ad agency, it's the little part of the world where we still fawn over 40 year old hardware. I always find it disappointing when the two get mixed up.
added on the 2024-04-02 19:28:53 by grip grip
Quote:
You wrote a prompt? Nice. I'm sure it's the inevitable future. But this isn't an ad agency.

This. These prompt-to-result AI machines are ad agencies' wet dream. Producers' wet dream. Social media managers' wet dream. A dream come true for all those who use a cellphone as their primary work tool. A nightmare for everybody else. They were not made for artists; they were made against artists. AI prompt machines and their owners are to artists what Uber is to drivers: enslavers. And no, I don't think the original intention can be subverted.
added on the 2024-04-02 19:46:31 by 4gentE 4gentE
ok, here are the questions:

1) A random "AI imaging" tool can take an image you give it plus a prompt, and "correct" the image by a user-defined percentage. At 0% you get the same image back, at 100% you get the image of the prompt, and everything in between. Where would you draw the line?

2) Another tool will take your image and make it tileable à la Photoshop, but also moves some features around to make it easier on the eye. The tool is "AI" trained. Can you use it?

3) Another tool takes your image and performs edge extraction or style transfer. Is that acceptable?

4) Is it acceptable to construct "interesting" noise patterns with an AI-trained tool (not YOUR training, btw) that you use as a generic background / source to drive other graphics? This would be akin to displaying a static pre-rendered fractal or raytraced image made with Fractint/POV-Ray instead of "your own renderer", back in the day. I remember it used to be common to write "This is our own rendering". But is this still relevant now?
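On question 1, for intuition only: in real img2img pipelines the "percentage" is a denoising strength rather than a literal pixel blend, but the 0%-to-100% dial can be sketched as simple interpolation between the artist's source and a fully generated image. All names here are hypothetical:

```python
# Crude linear model of the 0%..100% "correction" dial described above.
# Real diffusion tools vary how much noise is added before denoising;
# this lerp is only an intuition pump, not how such tools work inside.

def blend(source_pixels, generated_pixels, strength):
    """strength=0.0 -> artist's source untouched;
    strength=1.0 -> fully AI-generated result."""
    assert 0.0 <= strength <= 1.0
    return [(1.0 - strength) * s + strength * g
            for s, g in zip(source_pixels, generated_pixels)]
```

Even in this toy form, the question stands: is there a value of `strength` at which the result stops being the artist's work?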


btw, on the matter of "it will drive pixellers out of a job": I've heard this before for "true" coders: DOS vs Windows, software rasterizers vs OpenGL, C++ vs Unity/Unreal/Notch. Of course this is a very PC-centric view, and I recognize things are very different on other platforms.
added on the 2024-04-02 20:06:10 by Navis Navis
Quote:
1) A random "AI imaging" tool can take an image that you give it and a prompt, and "correct" the image by a user-defined percentage. At 0%, you get the same image but at 100% you get the image of the prompt, and everything in between. Where would you draw the line ?
somewhere before the creation of the model trained on art without the informed, enthusiastic consent of the artists
added on the 2024-04-02 20:11:03 by wayfinder wayfinder
Quote:
You wrote a prompt? Nice. I'm sure it's the inevitable future. But this isn't an ad agency

A little louder to the people supposedly in the scene, please.
added on the 2024-04-02 20:21:28 by Gargaj Gargaj
