pouët.net

Coding assistants

category: code [glöplog]
Let's continue the AI code generation discussion here, so the Revision thread can be about Revision things.

Opinions seem to range from "Everything should be allowed - AI is just another tool in the toolbox" to "If you as much as talk to an LLM while working on a demo, you should be banned from even visiting a demoparty ever again".

Call me naive, but I am confident that we can find a reasonable middle ground that will allow us to continue having fun making and watching demos.
added on the 2026-02-03 10:12:14 by Blueberry
Quote:
Let's continue the AI code generation discussion here, so the Revision thread can be about Revision things.
That discussion started over there because the Anti-AI rules are hotly debated even in their team. :)

As for middle ground... There seems to be quite a fundamental difference in letting AI assistants do boring boilerplate coding gruntwork for you... and having them generate audiovisual assets for your demo.
added on the 2026-02-03 11:26:15 by Krill
Quote:
There seems to be quite a fundamental difference in letting AI assistants do boring boilerplate coding gruntwork for you... and having them generate audiovisual assets for your demo.

Completely agree. This is one of the nuances that compo rules should take into account.
added on the 2026-02-03 11:51:39 by Blueberry
Quote:
There seems to be quite a fundamental difference in letting AI assistants do boring boilerplate coding gruntwork for you...


Interesting. Best of all is when you delegate all the boring coding gruntwork to AI and go with Unreal Engine or something, right? Aren't you saying that, or at least heading there? In that case, let's admit that the demoscene is not really about "the art of code", it's about the art of audiovisual/motion design. And that (naive) conclusion was derived from what a true oldskool master coder had to say.
added on the 2026-02-03 13:39:48 by 4gentE
No.
added on the 2026-02-03 13:57:10 by Krill
My opinion, not that anyone asked for it or cares, is that it's a tool. We've always had tools that help do some stuff, and it's been obvious when they were used.

But in the hands of someone who really knows how to use the tools, the results have been amazing.

There's SO MUCH generated garbage out there right now, I don't even know what good use of these generators looks like. Probably something that doesn't look like it's been generated.

When the bubble is gone, the tools still won't go anywhere; there's no going back to the "good old days". What the new normal with regard to generative tools will look like, probably nobody knows.
added on the 2026-02-03 14:24:04 by sol_hsa
For some more nuance: "AI is _EXCELLENT_ at solving solved problems", as a scener I hold in high regard so elegantly put it over on CSDb.

It won't generate the next dot or rotozoomer record for you - pushing the envelope is still something that requires human ingenuity.
added on the 2026-02-03 14:48:37 by Krill
What would be nice is to have that power on my computer and in the hands of the community just like the "old tools" were instead of as a cloud service which charges actual money for each use and which retains all the distribution and modification power for a corporation which couldn't care less about freedom, and which provides the service without any remuneration to the creators who supplied it with source data.
added on the 2026-02-03 14:55:30 by fizzer
I second what @fizzer said.

Quote:
corporation which couldn't care less about freedom

You’re being too generous. It’s not that they don’t care about freedom; they detest it.

The endgame of this “tool” is to deskill people on a level and at a depth yet unseen. One final big move towards total control and enslavement.

If you (whoever “you” are) consider me and people like me boring and irritating that’s OK. I should be boring and irritating, that’s kinda the whole point. But if you fail to understand the importance of that single sentence paragraph above this one, we all (as humanity) have a really BIG problem.
added on the 2026-02-03 15:56:49 by 4gentE
Quote:
Let's continue the AI code generation discussion here, so the Revision thread can be about Revision things.

Opinions seem to range from "Everything should be allowed - AI is just another tool in the toolbox" to "If you as much as talk to an LLM while working on a demo, you should be banned from even visiting a demoparty ever again".

Call me naive, but I am confident that we can find a reasonable middle ground that will allow us to continue having fun making and watching demos.


The thing is, some people truly do believe in the second approach: that if an LLM is used in any form, the demo and demogroup must be banned, or at the bare minimum detested. To them it truly is a political/ideological stance worth standing on.

Just look at what happened to GZDoom. A coder who genuinely knows how to code tried an LLM for the first time and was instantly and permanently shunned by the community, because the community believes LLMs infringe on the GPL.
added on the 2026-02-03 18:02:44 by ^ML!^
Seriously, there is quantitative evidence that even great programmers are less effective with LLM support. Also, please tell me one sane reason, from the expert's perspective, to accept the drawbacks GenAI has:
* the internet is flooded with slop & crawlers
* expensive video cards & RAM due to datacenter demands, heating the planet in the process
* AI enables underperformers to be relevant without effort while increasing the effort required for actual performers to maintain relevance <-- most important point
added on the 2026-02-03 19:00:22 by NR4
tbh, the entire discussion highlights that most people seem to have very poor metrics for the quality of information they're presented with
added on the 2026-02-03 19:01:52 by NR4
Quote:
What would be nice is to have that power on my computer and in the hands of the community just like the "old tools" were instead of as a cloud service which charges actual money for each use and which retains all the distribution and modification power for a corporation which couldn't care less about freedom, and which provides the service without any remuneration to the creators who supplied it with source data.


There are free open-source tools and free open-weights models that one can run locally on a completely airgapped machine. The hardware required is comparable to a good gaming PC. The whole experience lags behind commercial cloud solutions by roughly 6-12 months, and is already very much practical.

But given that it's all very new and in an experimental, exploratory stage, it's a bit tricky to set up properly, and figuring out even the basics requires expertise across quite a wide range of domains. It is still rather far from "I clicked this button, it made a demo". Once it is set up, though, it's at the stage where my tiny laptop's integrated GPU can create a single-html-page fullscreen-shader-canvas complex SDF scene, with complex lighting and multiple objects, from just a single simple sentence prompt, within a couple of minutes (that used to be a smoke test that all commercial models failed miserably; now even free open-weights models running locally pass it).
added on the 2026-02-03 19:48:14 by provod
If people want to use LLMs for coding, let them. The demoscene has always been about challenges, and making something good using LLM generated code sounds like a really tricky one.
added on the 2026-02-03 20:44:33 by Radiant
I absolutely despise it being used for beautiful code or audiovisual art.
And the meaning of art is different for different people - for a lot of people in the demoscene, code is art.

My issues with the data collection are related to the AI usage outside of what we are playing around with.

About assistance - if it eventually gets good enough, it will feel dehumanizing. I'm already seeing some people who run their thoughts through AI before thinking.
added on the 2026-02-03 22:13:50 by wrighter
What about tooling? I am using GenAI to make various tools, and it has turned something that's often boring into something quite fun. The AI-generated code meets the objective of the tool, even if it is often ugly.
added on the 2026-02-04 06:13:41 by iTeC
I've had Copilot enabled for a while, cause I got it for free. Below I list my experiences, purely from a programmer's perspective, not commenting on environmental or societal issues. I am NOT commenting on using generative AI for art images or videos, which I absolutely detest.

The good:

It is pretty good as a refactoring tool, a bit faster than doing search-and-replace and a bit more capable than classical refactoring tools like "rename variable" or "extract method". For example, when I change my data structure layout (say, a.c becomes a.b.c), Copilot sees me doing this a few times and can figure out how the rest of the code should be changed. Would I be able to do all this using the old ways? Absolutely. Would it be slower? Probably.
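
To make that concrete, a made-up toy example (not my actual code) of that kind of layout change:

```c
/* Hypothetical layout change: field `c` moved into a nested struct `b`,
 * so every access site must change from a.c to a.b.c. */
struct inner  { int c; };
struct widget { struct inner b; };  /* before the change: struct widget { int c; }; */

int get_c(const struct widget *a)
{
    return a->b.c;                  /* was: return a->c; */
}
```

After seeing a couple of such edits, the tool can usually propose the same mechanical change at the remaining call sites.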

Copilot is also pretty good at generating function documentation. After I write about 25% of the documentation comment, Copilot does a pretty decent job filling in the rest. It spits out a decent, human-readable explanation of what a function does. You might have to do a few passes on it (accept suggestion, delete part of it, write new text, accept the next suggestion, etc.).

The bad:

Using Copilot to generate a function from scratch is almost always a mistake. It spits out a function very eagerly. And I don't find too many bugs in there, in the sense of functions forgetting to check for nils and crashing, or other simple coding errors. But the problem is that it just doesn't do what you planned the function to do, so in the end you have to write the function yourself anyway. Maybe you can still use Copilot to autocomplete the last 10-20%, like closing for-loops and adding returns and ending braces, but that's again very simple grunt work compared to what the AI tech bros are selling us.

The ugly:

It is very easy to veer into vibe-coding territory: to stop having a plan for a function and just hit tab to autocomplete, "trusting the process", even when some of the output already looks odd and out of place. And it is here that you start introducing massive bugs, because Copilot does not grasp the greater internal logic of the project. Messages get passed, yes, but to the wrong destinations; the GUI gets redrawn even when there is no need for it, or vice versa; etc. These kinds of bugs can be very difficult to debug; it's comparable to debugging a new codebase written by someone else, and takes far more time than fixing your own bugs. Thus, one has to learn to actively resist the temptation and just ignore the autocomplete until the work is almost done. My brain is already fighting the temptation to check Discord or watch cat videos; I absolutely detest that I need to fight yet another temptation.

It most certainly does change habits. I had to write some Matlab code recently in an IDE that does not have any form of autocomplete. For the first hour or so, it felt totally weird to type every character by hand. "Wait, I actually have to go through every line and add a comma at the end? Jeez." I think I'm not too far gone yet, cause I was still able to adapt, but I wonder whether I'll still be able to do the same 10 years from now.

Summary and Conclusions

Good at refactoring and at the final touches of the owl painting; absolutely terrible at drawing the rest of the owl. Tempting to misuse, and it changes our habits.

All that being said, I don't think a prod should be disqualified from a compo for using a programming aid just because "you didn't write the code". At best it speeds up grunt programming slightly, but it can also slow you down significantly by introducing difficult bugs, so overall it's pretty neutral, or even negative if misused.

If some orgas want to disqualify prods that have used coding tools, to take a stance on the environmental issues and/or the tech-bro oligarchy, I'm all for it. But I would not worry about "AIs generating an entire prod", at least not with today's tools. If you use today's tools to generate all the code for a prod, it will probably look like shit and/or not work at all. Maybe in 10 years the situation will be different.
added on the 2026-02-04 08:19:11 by pestis
Ban autocomplete from demo compos!

Quote:
it just doesn't do what you planned the function to do,


That's not so strange, is it? The most succinct way of expressing exactly what a function is supposed to do is by writing it in code, after all. The whole "AI will/can replace programmers" idea is based on the misconception by non-programmers that programmers are somehow essentially gatekeeping programming by making it harder than it needs to be. It doesn't make any sense if you think about it, but that's vibe coding evangelism for you.

LLMs can be convincing; even some programmers are now fooling themselves into thinking that programming can somehow be reduced to providing general specifications and then letting an AI do the work, but they tend to come to regret it the more they actually attempt to work that way. Typically they end up having written such detailed specifications that, really, just writing the code in the first place would have been faster.

AI is a tool, not a solution. Use it accordingly (and sparingly).
added on the 2026-02-04 08:41:07 by Radiant
One more addition to the list of bad things: Copilot is sometimes worse than traditional autocomplete at some very simple things, like completing the name of a method. It often hallucinates these methods, even when the method is defined in the same codebase and Copilot should in principle be able to know all the functions/methods. It just looks at the type definition, the type name and/or the name of the variable and makes a (bad) guess at what kind of methods it might have. So in this sense it does also introduce bugs, but in a strongly-typed language the IDE immediately marks them in red, so it's not really a big issue. I imagine loosely-typed languages might have a much bigger problem here.

I expect this issue to go away in a few years though.
added on the 2026-02-04 11:41:53 by pestis
LLM-based autocomplete is not a good application of the technology; it hasn't been proven productive at all, and has been generally abandoned for years. There are much better workflows with tool-assisted agentic loops staged into planning, writing tests, editing code, fixing tests, doing reviews, etc.

Legal and moral issues aside (which are huge and valid topics on their own), an opinion either way would carry more weight if it were backed by a solid understanding of the technology and its modern application practices. What it can and can't do.

IMO:
- it can't purely vibe-code anything reasonable into existence. The development process still has to be managed with a lot of domain expertise, so no real democratization there. And one has to be an engineer, a project manager, and a daycare teacher at the same time.
- it can't do anything too complex or too new. Frontiers or esoteric edge cases have to be handled manually.
However:
- It can do reasonably limited refactors, touching dozens of files. It can generate solid unit tests (which are usually a pain to write manually, and easy to check visually) and make sure that they pass.
- it can do very good code reviews, catching things you've never thought about and never knew existed (even after decades of experience). It just knows the entire internet, and can quickly spot patterns that one physically cannot learn in a single lifetime.
- it is already a real force multiplier. Using the toolset one can do in minutes what took hours manually. One can have multiple of these processes running in parallel for more complex tasks.

Practically speaking, this tech isn't going away unless we all die in a thermonuclear war. It's like weather patterns; there's nothing a small community can do about it. In my opinion it's more pragmatic to figure out how to live with it now, e.g. how to make sure we can still teach new engineers, and that there are still real job prospects and career tracks for junior people.
added on the 2026-02-04 15:24:16 by provod
Quote:
Using the toolset one can do in minutes what took hours manually.


[citation needed]
added on the 2026-02-04 15:59:44 by Radiant
The stochastic nature of LLMs turns me off when thinking about coding. Also, the coding won't be creative anymore. It's antithetical to the nature of democoding, and would mostly match the nature of corporate work, where they just want results (though even that doesn't seem to work well, when I remember the news of some CEO who fired their programmers in favour of AI and then regretted the decision).

Conversing with them to get some clues or ideas, asking about an API or language feature, rubber-duck debugging - that's OK for me.

But I would write everything myself, even the boilerplate/grunt work. I don't find even those kinds of work that bad. If it's that boring and repetitive, maybe just automate it with a classic generative algorithm, to be 100% sure the output is correct (and because it's more fun :).
added on the 2026-02-04 17:39:32 by Optimus
Quote:
Practically, this tech isn't going away, unless we all die in a thermonuclear war.

Why is that?
We could limit it to big-data analysis/pattern-recognition jobs (like medical diagnostics) and stop shoving it into things where it doesn’t fit. But first, we burst that damn bubble, no matter the cost. It will only get more costly.
Since you mentioned a thermonuclear war, I remember a wise woman once said: "There's no fate but what we make for ourselves".
added on the 2026-02-04 17:39:52 by 4gentE
Quote:
Quote:
Using the toolset one can do in minutes what took hours manually.


[citation needed]


I'm citing personal experience. It is very good at generating generic REST/CRUD/SQL boilerplate and associated unit tests. It's a lot of grunt work that is basically limited by typing speed, not thinking.

I haven't tried it yet on anything more serious, or trickier C/C++ or Vulkan stuff.

However, as I mentioned earlier, it did surprise me by generating a perfect single-page html+js+webgl+glsl fullscreen SDF raymarcher, using fully free-range, grass-fed, ethically-sourced (not really, we just don't know) local open-source tools and models; no cloud was harmed in the making. It took a couple of minutes. It would take me at least an hour to do the same manually, and I don't need to prove to anyone that I'm perfectly capable of hand-crafting complex SDF raymarchers really fast.
added on the 2026-02-04 18:51:57 by provod
