Coding assistants
category: code [glöplog]
Quote:
Quote:
Practically, this tech isn't going away, unless we all die in a thermonuclear war.
Why is that?
We limit it to big data analysis/pattern recognition jobs (like medical diagnostics) and stop shoving it into things where it doesn’t fit. But first, we burst that damn bubble, no matter the cost. It will only get more costly.
Since you mentioned a thermonuclear war, I remember a wise woman once said: "There's no fate but what we make for ourselves".
I mean, practically speaking, what power do you have to:
1. Go tell multiple multi-trillion-dollar companies to "please stop onegai~". And they'll be like, "oh, i'm sorry 4gentE, oopsie-woopsie, our bad, we stop now, UwU"
2. Personally go to the homes of millions of people who already have models downloaded locally and destroy all their hardware to make sure there are no backups left.
Shoving it where it doesn't fit is just the nature of the hype cycle; it happens every time with whatever the latest fad is. Practically speaking one can (and should) refrain from participating, and then also make well-informed arguments for fellow humans to educate them on the matter.
Quote:
Practically speaking one can (and should) refrain from participating, and then also make well-informed arguments for fellow humans to educate them on the matter.
I couldn’t agree more.
Although, it seems that in the world of today, in some (increasing number of) cases, well-informed arguments fail to work. I mean, for example, just look at that turd sitting in the White House and how it got there. That's the world we live in.
Usually I like bringing up CFCs and leaded fuel as examples of tech that worked great for both users and capital but was (thank god) abandoned thanks to the power of well-informed arguments. Today, I don't think we could accomplish anything like that. Because "democracy and freedom". Go figure. The "do whatever the f*ck you wanna do" wisdom we often hear.
Quote:
It is very good at generating generic REST/CRUD/SQL boilerplate
Is it really that much faster than just copy-pasting the boilerplate from some example code like we've always done?
Quote:
However, I mentioned earlier that it did surprise me with generating a perfect single page html+js+webgl+glsl fullscreen sdf raymarcher,
Same, hardly a novel thing, and you can readily piece together such code from what you can find on the web (which essentially is what the LLM is doing).
I'm not contesting that an LLM would do this faster, but I think there's a bit of hyperbole in exactly how much of a productivity boost it provides. Writing boilerplate code is only a small part of any given project I'm working on, so I don't see huge productivity gains from optimising that process. (I also like writing boilerplate to some extent; I find it focuses my mind. But that's on me; I know others hate it.)
I just have a single dilemma about using generated boilerplate code: would that be considered AI usage in code, or unethical usage in general, or not?
Say you want to make some simple API call that has nothing to do with your logic, just how the function call works. You ask ChatGPT and it gives you three lines of code that work. If you copy-paste and test it and it works and it's obvious, would that now count as inserting AI into your code? If I were to search on Google or Stack Overflow, it would be the same three lines of code that I would copy-paste anyway in both cases.
In fact I did this not long ago when open-sourcing my old demo CloseGL: after I did various optimizations and fixes, I realized I hadn't added a vsync option. So I asked and got a few lines to enable vsync in OpenGL through extensions. I think I must have disclosed this single usage in an nfo file or in comments inside the source code, but I can't find it now. So as long as I say I hate using AI in my code, in this case I am lying, though it's very trivial code you would find by a regular search.
Quote:
If I were to search on Google...
Yeah, this could also give you an AI answer. Theoretically, even the search hit could be a link to something AI generated. And there will be no disclaimers. In fact, pretty soon most of the search hits will link to AI generated content. So this "personal ethic / live and let live" just won't work. Not in the way veganism works anyway. The world either needs way more aggressive strategy in order to resist this, or we may just give up (which seems to be the common behaviour).
And it's not even about three lines of vsync code, tbh... I've wrestled with the same question from a completely different angle.
I was an illustrator before that market collapsed a long time ago, so I became a 3D artist... but that got commoditized.. *sigh*.. so I moved into game dev.. when that collapsed, I became a Houdini generalist before that market dried up. Every single time, the thing I'd built my identity around got either restructured or pulled out from under me... the entire landscape shifted. Being a tool dev for the past ~8 years was a relative safe haven.. but with AI I see the same pattern forming here... And I see ppl arguing about where the line is instead of looking at what's actually happening..
Optimus, your three lines of vsync code ain't the problem and you're not cheating.. Google, Stack Overflow or sh*tjibbity, where's the fucking difference.. different messenger. unreliable, inefficient and with questionable trustworthiness, sure.. but as you said: if it works, it works. that's that.
I think the real question is bigger than any of us.. What happens when the pipe gets wider? When it's not three lines but thirty? boilerplate? architecture? When the next version of the tool doesn't just look up the answer but *truly* understands the problem?
Five careers dissolved under my feet, and I didn't do anything wrong.. the world simply decided it didn't need this guy here in that particular configuration anymore.
The conversation about "where's the line" is the wrong conversation imho. We all know all too well that that line moves all the time.. I've been on the wrong side of it enough times to know :/ What do we do when it moves past all of us?
Quote:
When the next version of the tool doesn't just look up the answer but *truly* understands the problem?
Don't buy into the "AGI is impending" hype; there's an ocean of difference between that and today's LLMs, and we're already approaching the limits of what LLMs are capable of. Nobody even has an idea of what making an AGI would require. The AI snakeoil salesmen rely on the "everything is accelerating from here" narrative, because their business model requires it, so don't trust what they are predicting, but instead brace for the bubble bursting.
@Radiant AGI is another level completely; I'm not talking about AGI... for AI to be a real threat we don't need superintelligence.. that's the whole point.. do you think I lost my careers because my skillset was outperformed?
I think we are suffering over this so much because the AI hype is shoved down everybody's throats. Massive amounts of money are poured into it, and services are run at a loss to get people addicted, so that there will be customers when the prices are ramped up to what they actually should be.
It's all due to that "AGI is impending" hype exactly. Altman, Jensen and other assholes are talking about it, everybody is afraid that their careers will end, the media talks about it constantly because it sells clicks, and they give space to Sutskever, Musk and whoever says they are an expert on this. Everybody's trying the tools, using them for some things they are good for. Others see the tools being used and wonder if they should too, so that they don't miss out. Gahh, it's such a mess. The whole thing is based on a hoax perpetuated by these snakeoil salesmen.
Don't panic about AI, but brace yourself for what happens afterwards. Climate change is still happening, oligarchs are trying to expand their influence around the world, and our economic system is completely broken and is being used to suck all wealth out of the middle class (perhaps the social class the demoscene, for example, mostly comes from).
Quote:
the world simply decided it didn't need this guy here in that particular configuration anymore.
"The world" didn't decide sh*t. "The system" decided. And inside the system there are people. There are those (in power) that deliberately push new tech upon established jobs to deskill/demote/underpay experts like you. Then there are the "disloyal": your colleagues that never quite reached your level, so they were eager to jump into something else. Then there are the wannabes, the charlatans that dreamed of creative careers but always lacked talent and determination, etc. But the people that are most annoying are the "useful idiots", the people that don't have "a horse in the race" at all and just want to come off as cool and profound. I work in a creative field myself, and I've been around for ages, enough to see quite a few "revolutions" and "democratizations", so I totally understand what you're talking about. So yes, some people's lighthearted "idontgiveafuckery" and cool-acting about it all comes off as uninformed, insensitive, even insulting. Perhaps because it's exactly that. Uninformed, insensitive, insulting.
What might have kept this in check would be socialist policies like those which were introduced out of a fear of another revolution like the communist revolution in Russia, and out of a desire to make certain countries (which could afford it) more attractive than others. The problem now is that there is no fear, there is no threat from the people or from an invader, and there is lessening support from the richest country in the world. So oppression and looting, and nepotism, and manipulation, can go unpunished with confidence.
Quote:
What about tooling, I am using Gen AI to make different tools, and it's taken something that's often boring to being quite fun. The AI generated code meets the objective of the tool, even if it is often ugly.
I think this is a very important distinction. Using GenAI for tooling does not run into the legal issue of not having the right to grant the party organizers a redistribution license. It also does not conflict with the "beautiful code" narrative of demomaking (that I very much agree with).
This line of reasoning leads to a natural place to draw the line for compo rules:
Any compo rule that is phrased in terms of using AI inevitably becomes vague, overly broad and practically unenforceable. On the other hand, if the anti-AI clause states that prods must not contain AI-generated content, this makes it much clearer what contestants can and cannot do. And it naturally covers all kinds of AI content in a way that (as far as I can see) treats the somewhat different nature of code in a reasonable way.
In that case: if the prompt "Dear AI, make me a demomaker" produces a working demomaker, and you then proceed to use this demomaker as a mere tool, did you end up with a viable demo? I mean, this is obviously pushing the rules, but wasn't the demoscene historically overly appreciative of "cleverly" pushing the rules?
The rejection of AI-generated content improves the audience experience, not necessarily the creator's experience.
@fizzer Yeah, that sounds plausible, although I'm not old enough to really know how the world was perceived back then or what the motivations for neoliberal policies were at the time. From my parents I've only heard that everybody was scared of a nuclear war and relieved when the Soviet Union went down.
Quote:
What might have kept this in check would be socialist policies like those which were introduced out of a fear of another revolution like the communist revolution in Russia, and out of a desire to make certain countries (which could afford it) more attractive than others. The problem now is that there is no fear, there is no threat from the people or from an invader, and there is lessening support from the richest country in the world. So oppression and looting, and nepotism, and manipulation, can go unpunished with confidence.
I find this to be a pretty accurate picture of the world and the “AI” onslaught.
In short, it could be said that "those in power are pushing poison onto all of us for their own financial benefit. Because they can." Thing is, if this is so, then why are we even talking about this in this completely non-commercial subculture/scene? No techbros are forcing anyone to do anything here. Oh, so they are not pushing poison on everybody. Some seem to take it eagerly.
Lots of my coworkers are quite horny for AI, no need for pushing there. It's quite sad.
I use ChatGPT to code for work.
I'm not an actual coder, but it allows me to complete a project in a reasonable time frame.
I've coded, but learning modern coding skills at my age and with my abilities is near impossible.
Quote:
I've coded, but learning modern coding skills at my age and with my abilities is near impossible.
Why? I maintain that it's never too late.
Quote:
Optimus, your three lines of vsync code ain't the problem and you're not cheating.. Google, Stack Overflow or sh*tjibbity, where's the fucking difference.. different messenger. unreliable, inefficient and with questionable trustworthiness, sure.. but as you said: if it works, it works. that's that.
As I know from personal experience: wrong.
There are some people, even in this scene, who maintain that any usage at all is enough to be cancelled, thrown out and ridiculed, banned from competitions, etc., and publicly hounded till the end of time, because to them AI usage is such an ethical/political/moral minefield that even merely mentioning it is enough to be reviled.
To them there is seriously enough political baggage that even death threats are considered completely deserved.
Typing in LLM prompt = crime against humanity. Why are we even discussing this? :]
Quote:
The rejection of AI-generated content improves the audience experience, not necessarily the creator's experience.
Good point. It's about striking the right balance that maximises the overall enjoyment for creators and audience alike. Too lenient, and we might be drowning in slop (I have not seen this myself, but some people in this thread have mentioned seeing indications of it). Too many restrictions and too much control, and creators may decide that their scarce free time is not worth the hassle.
The formulation I suggested above is certainly towards the strict end of the spectrum. This basically corresponds to the Evoke 2025 rules. The sweet spot may very well be further towards the lenient end. This makes it a bit more tricky to define the rules precisely, though.
One such loosening could be to say that AI content is prohibited insofar as it substitutes for a creative process. This will still basically rule out graphics and music. As it applies to code, it will rule out using it for effect code, design, optimizations, basically all of the "beautiful" code, whereas it would allow things like boilerplate, format conversions, API usage and the like.
So you could use it to load and process the texture, but not wobble it? And who would judge this? I think it's a very arbitrary limitation.
The more I think of this, the more I come to this conclusion: AI makes the demoscene as we know it obsolete. Most are sad about it, but some are happy.
Quote:
AI makes the demoscene as we know it obsolete.
I disagree. There's a huge range of motivations for releasing -
* experiencing self-efficacy (good luck achieving that with AI - that needs astronomic delulu levels)
* showing friends something you made and you're proud of (how could you if you used AI)
* winning a compo (lol, AI can help with that)
* pushing a technical limit (yeah no, AI doesn't do that)
* making art (up for discussion whether or not AI art is a thing, but who cares)
* self-therapy (really doesn't matter if AI is part of the process here or not)
* giving a friend with a great prod a fair and existent competition / filling a compo (yo, can do that with AI, but kinda whack, no?)
* attention-whoring (AI might be suitable for that)
And I'm sure I forgot a lot, but most of that isn't really an AI question IMHO