pouët.net

Is this possible to code?

category: general [glöplog]
iq: I'm really sorry to tell you, but it is not going to work very well.

It will not work better than this, for example: http://www.pouet.net/prod.php?which=51958

Just think of the 1-dimensional case, like a sound waveform. You place 2 points, 1 color and 1 blur; it could be 4 bytes in grayscale, for a line 256 samples long. If you do an additive version, you can make the color go from -255 to 255 and the blur from 0 to 127, for example, so it can add or subtract (I believe that would be the most efficient way).

What do you have? Something like this: ___/'''''''''\____ for every line you trace. So you divide the segment into 5 parts, with 4 points. If you add another element, you divide the segment into at most 9 parts, with 8 points, and so on. Maximum number of parts = (number of elements * 4) + 1.

With, for example, 20 elements, that's at most 81 parts, all linearly interpolated, and 80 bytes to store them. Not bad, but do you really think there will be any difference from a delta + arithmetic compressor? I don't think so. The deltas would be easily compressible, while the elements will have high entropy.
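Just to make the idea concrete, a rough sketch (not real or tested code; the ramp shape, the field layout and all values here are only one possible choice):

Code:
/* one 1-D "element": two positions, a signed color and a blur width,
   added as a trapezoid ___/'''\___ into a 256-sample grayscale line.
   In practice you'd pack this into ~4 bytes; plain ints here for clarity. */
#include <stdio.h>

#define LEN 256

typedef struct { int x0, x1, color, blur; } Element;

void add_element(float line[LEN], Element e)
{
    for (int x = 0; x < LEN; x++) {
        float t;
        if (x < e.x0)      t = 1.0f - (float)(e.x0 - x) / (e.blur + 1);  /* ramp in  */
        else if (x > e.x1) t = 1.0f - (float)(x - e.x1) / (e.blur + 1);  /* ramp out */
        else               t = 1.0f;                                     /* flat top */
        if (t < 0.0f) t = 0.0f;
        line[x] += t * e.color;   /* additive: a negative color subtracts */
    }
}

int main(void)
{
    float line[LEN] = {0};
    Element e = { 80, 170, 200, 16 };   /* made-up values */
    add_element(line, e);
    for (int x = 0; x < LEN; x += 32)
        printf("%3d: %6.1f\n", x, line[x]);
    return 0;
}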

On the other hand, details: if there is a big change in a single pixel, you will spend 4 bytes on it. So this is not for textures or noisy images.

Finally, do you remember the compression thing I tried with circles? It was a very similar thing. I got very near to JPEG/wavelets, remember? (exactly the same kind of thing, for 16x16 pixels). So what? Around that point there is not much more practical compression to gain.

If you don't care about textures, then any method could be fine. But any compression that doesn't take textures (high frequencies) into account is not going to be very useful...
added on the 2008-12-11 08:30:11 by texel
Well, just one thing more...
BB Image

You might draw nice pigs. :P
added on the 2008-12-11 08:39:00 by texel
"collective consciousness", "emergent art", i like those concept names, better than the results themselves :>
added on the 2008-12-11 09:27:30 by Zest
An easier way to figure this out would be to take good old Scorched Earth.

Parameters: angle and power.
Fitness rating: how far the explosion was from the target.

Take, say, a hundred random values and check the results. Eliminate, say, the worst 75%, and generate another hundred near the best 25% of the last round. Repeat until you no longer get a better result.
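Something like this, as a rough C sketch (the ballistics, constants and mutation radii are all made up; the point is just the keep-the-best-and-resample loop):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POP     100
#define KEEP    25          /* survivors per round (best 25%) */
#define PI      3.14159265358979
#define GRAVITY 9.81
#define TARGET  300.0       /* distance to the enemy tank */

typedef struct { double angle, power, error; } Shot;

static double frand(void) { return rand() / (double)RAND_MAX; }

/* fitness: how far the shell lands from the target (lower is better) */
static double impact_error(double angle, double power)
{
    double range = power * power * sin(2.0 * angle) / GRAVITY;  /* flat-ground range */
    return fabs(range - TARGET);
}

static int by_error(const void *a, const void *b)
{
    double d = ((const Shot *)a)->error - ((const Shot *)b)->error;
    return (d > 0) - (d < 0);
}

int main(void)
{
    Shot pop[POP];
    for (int i = 0; i < POP; i++) {
        pop[i].angle = frand() * PI / 2.0;   /* 0..90 degrees */
        pop[i].power = frand() * 100.0;
    }
    for (int round = 0; round < 50; round++) {
        for (int i = 0; i < POP; i++)
            pop[i].error = impact_error(pop[i].angle, pop[i].power);
        qsort(pop, POP, sizeof(Shot), by_error);
        /* kill the worst 75%, respawn them near the surviving 25% */
        for (int i = KEEP; i < POP; i++) {
            pop[i].angle = pop[i % KEEP].angle + (frand() - 0.5) * 0.1;
            pop[i].power = pop[i % KEEP].power + (frand() - 0.5) * 2.0;
        }
        printf("round %2d: best error %.2f\n", round, pop[0].error);
    }
    return 0;
}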
added on the 2008-12-11 13:17:24 by sol_hsa
How hard can it be?...

Code:
while (it doesn't look close enough to the desired result) {
    clear;
    draw a bunch of random polygons;
}

added on the 2008-12-11 13:47:33 by cruzer
"dna".. this revives some old hatred from when I read a book about AI once, entitled "machine learning". (Stay away from that book, btw) The one thing I still remember learning from it was that the name "neural networks" is the most misguiding name for a function-optimizing framework. At least when trying to figure out what it is and what you can do with it, it doesnt help much to be thinking about neurons. When it's time to get your hands dirty and implement backpropagation, it turns out that your collection of "braincells" are nothing more than a finite set of smooth functions.

I am 100% sure these names were invented by some smartass at MIT in the process of writing a research proposal to the National Science Foundation. Smoke and mirrors!
added on the 2008-12-11 14:04:05 by Hyde
Heh, that's the same overall feeling of disappointment among computer science students in any Artificial Intelligence class, who get to learn... not much about what sci-fi or pop culture puts under the AI label :/
added on the 2008-12-11 14:21:59 by Zest
I would rather call this type of class 'Problem Solving'.
added on the 2008-12-11 14:23:54 by Zest
Something tells me you'd get better results by starting out with a program that depends on a lot of user interaction to recognise features and such, then automating more and more of it until it can eventually do the whole image on its own. That's what my intuition says, anyway.

What I do know about AI is that attempts to find a simple basis for decision making, i.e. a small core set of rules that you simply apply "enough instances of", tend to fail miserably.
added on the 2008-12-11 21:05:13 by doomdoom
Quote:

Quote:

Q) What is your fitness function?

A) The fitness function is a pixel-by-pixel compare where the fitness for each pixel is summed and compared to the parent.

So I guessed right.
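(For reference, the fitness being talked about boils down to something like this. Sketch only: the image layout is assumed, and whether the per-pixel difference is absolute or squared isn't stated, so absolute is used here.)

Code:
#include <stdlib.h>

/* per-pixel difference against the target, summed over an 8-bit RGB image
   (row-major layout assumed); lower total = fitter candidate */
long fitness(const unsigned char *candidate, const unsigned char *target, int w, int h)
{
    long sum = 0;
    for (int i = 0; i < w * h * 3; i++)
        sum += labs((long)candidate[i] - (long)target[i]);
    return sum;   /* the child replaces the parent only if its sum is lower */
}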


Quote:

it's a simple hill climbing algorithm, and given the way it was implemented, it's extremely prone to get stuck in local maxima. so, in all likelihood, this is nowhere near the best approximation of the mona lisa you can get with 50 quads.

mind, this would actually be an interesting problem to try and solve using more sophisticated search techniques, but that's not what the author is doing. a very simple way to improve the algorithm is to use simulated annealing instead of hill climbing: if the modified solution is better than the current one, always accept it, and if it's worse, accept it with a certain probability (which you successively decrease during the iteration). just as easy to code and way less likely to get stuck in a local extremum; the only fiddly part is how to modify the acceptance probability over time (the "cooling schedule" in SA parlance).


You don't sound that smart anymore Adok, now do ya? (Actually, you never sounded too smart in the first place).
added on the 2008-12-11 21:34:15 by kurli
rainmaker: *gasp* How can you say such things about a Mensa-member?!?!!?
added on the 2008-12-12 01:15:43 by gloom
The algorithm he described is not genetic programming, but simulated annealing without adaptive step width. To be more true to the paradigm of genetic algorithms, he should have concurrent species and crossbreeding in addition to mutation.
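(For what it's worth, the crossbreeding part would be something like this: a one-point crossover over flat parameter arrays. Just an illustration, the function and names are made up.)

Code:
#include <stdlib.h>

/* one-point crossover: the child takes the first 'cut' genes from one parent
   and the rest from the other (genes = flat array of polygon parameters) */
void crossover(const float *mom, const float *dad, float *child, int genes)
{
    int cut = rand() % genes;
    for (int i = 0; i < genes; i++)
        child[i] = (i < cut) ? mom[i] : dad[i];
}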

added on the 2008-12-12 05:11:35 by Calexico
Oh yes, other people already pointed that out. I like simulated annealing.

added on the 2008-12-12 05:12:30 by Calexico
ryg: for it to be hill climbing he would need some kind of gradient function, which he does not have.

added on the 2008-12-12 05:13:25 by Calexico
actually, nope :). that would be gradient descent (or gradient ascent if you're maximizing instead of minimizing), which is one particular type of hill-climbing algorithm.

and it's not simulated annealing if there's no acceptance probability for moves away from the current maximum/minimum - you can't just remove the key ingredient :).
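i.e. something like this (sketch only; the constants and the cooling schedule are placeholders, not tuned values):

Code:
#include <stdlib.h>
#include <math.h>

static double frand(void) { return rand() / (double)RAND_MAX; }

/* returns 1 if the mutated solution should replace the current one */
int accept_move(double current_error, double mutated_error, double temperature)
{
    if (mutated_error <= current_error)
        return 1;                                    /* always take improvements */
    double p = exp(-(mutated_error - current_error) / temperature);
    return frand() < p;                              /* sometimes step uphill */
}

/* a typical cooling schedule shrinks T a little every iteration, e.g.
   for (double T = 1000.0; T > 0.01; T *= 0.999) { mutate, accept_move, ... } */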
added on the 2008-12-12 10:06:43 by ryg
Well, at least one thing is for sure: It's not Hypnoglow.
added on the 2008-12-12 10:09:20 by kusma
if you read enough books, it might just turn out to be!
added on the 2008-12-12 10:41:29 by Hyde
ryg, do you recommend Metropolis as a method to drive the walk (in parameter space)? Do you have experience with it?
added on the 2008-12-12 11:40:30 by iq
i have no experience with metropolis; i've heard a very brief explanation of metropolis integration in a CG course (in one lecture about MLT), but that's about it. certainly nowhere near enough to have any meaningful opinion to offer.
added on the 2008-12-12 13:19:57 by ryg
rainmaker: Why? I guessed right. That doesn't mean it's the best algorithm I can think of.
added on the 2008-12-12 15:02:09 by Adok
adok> could you please remind me of your acclaimed IQ score?
Does anyone know the exact byte size of the result of this? (It's said it's not triangles but polygons; with how many vertices?) And the code to actually "decompress/draw" it as well? I would guess a simple wavelet transform with a zerotree encoder/decoder would do a better job. I'm gonna try it out!
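(For the wavelet idea, the basic building block is roughly this: a one-level 1-D Haar step. A zerotree coder in the EZW/SPIHT family would then exploit how small the detail coefficients are. Just an illustration with made-up sample values, not related to the actual prod.)

Code:
#include <stdio.h>

/* one level of a 1-D Haar transform: averages in the first half,
   differences (mostly near zero on smooth data) in the second half */
void haar_step(float *data, float *tmp, int n)
{
    for (int i = 0; i < n / 2; i++) {
        tmp[i]       = (data[2*i] + data[2*i + 1]) * 0.5f;   /* low-pass  */
        tmp[n/2 + i] = (data[2*i] - data[2*i + 1]) * 0.5f;   /* high-pass */
    }
    for (int i = 0; i < n; i++)
        data[i] = tmp[i];
}

int main(void)
{
    float data[8] = { 10, 12, 14, 16, 20, 20, 21, 23 }, tmp[8];
    haar_step(data, tmp, 8);
    for (int i = 0; i < 8; i++)
        printf("%g ", data[i]);
    printf("\n");   /* smooth input -> small detail coefficients */
    return 0;
}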
added on the 2008-12-12 21:19:58 by LoneStarr
Quote:
Q) Is this Genetic Programming? I think it is a GA or even a hill climbing algorithm.

A) I will claim that this is a GP due to the fact that the application clones and mutates an executable Abstract Syntax Tree (AST).

Even if the population is small, there is still competition between the parent and the child; the fitter of the two will survive.


...wat
added on the 2008-12-12 21:29:50 by Gargaj
probably.. if you're a tyrannosaurus rex
