pouët.net

4k intro scenes timings

category: code [glöplog]
*facepalm* Good idea.
Never thought of looking *back* in the channels; this can make a lot of things easier.
added on the 2016-08-17 21:28:04 by p01 p01
It does fit quite well with the usual "stateless" way of doing things.

Of course, the lookup time doesn't have to be the current time (though it usually is). Looking a bit ahead can be used to make things happen *before* something happens in the music, which can work great as a build-up sometimes (e.g. something moving towards the point it should hit at the trigger time).
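
A minimal sketch of that lookahead idea, say, with a hypothetical getSyncValue(channel, time) helper that samples a sync channel at an arbitrary time (CHANNEL_KICK, restX, hitX are made-up names):

Code:
// peek half a beat ahead on the kick channel:
float upcoming = getSyncValue(CHANNEL_KICK, currtime + 0.5f);
// start moving towards the hit point *before* the kick actually lands:
float x = restX + (hitX - restX) * upcoming;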
added on the 2016-08-17 22:15:54 by Blueberry Blueberry
Nice effects can be obtained by simply integrating note envelope values over time as well.
E.g. the beatsync in the 2nd and 3rd scene here http://www.pouet.net/prod.php?which=57526
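
Roughly like this, assuming a hypothetical envelope() accessor that returns the synth's current note envelope value for an instrument:

Code:
// accumulate the envelope instead of reading it directly; the sum
// only ever grows, but it grows in bursts on every note:
accum += envelope(INSTRUMENT_KICK) * deltaTime;
// feed it into e.g. a rotation for beat-synced motion:
angle = accum * 0.1f;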
added on the 2016-08-17 22:17:27 by gopher gopher
Quote:

How do you store scene timings in your intros? Which approach or solution is best, and why? Example:
if (currtime > 00000 && currtime < 12000)
drawScene1();
...

I never really compared it, but I think a quite effective way is to calculate a float from the sample count the music has played, divided by your pattern length, and send it to your shader. The value is something like n.m, where n is the current pattern and m is the 0-1 progress within your pattern. Now you can write it like:
if (currtime < 1)
{ doScene1(frac(currtime)); }      // pattern 0 -> scene 1
else if ((currtime -= 1) < 1)
{ doScene2(frac(currtime)); }      // pattern 1 -> scene 2

This should compress a bit better.
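
For reference, getting that value could look roughly like this (samplesPlayed and SAMPLES_PER_PATTERN are placeholder names for whatever your audio API and song give you):

Code:
// integer part = current pattern, fractional part = progress within it:
float currtime = (float)samplesPlayed / (float)SAMPLES_PER_PATTERN;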
added on the 2016-08-19 09:56:05 by TGGC TGGC
Quote:
Quote:
urs: What do you mean by "beats"? Could you please clarify?


Your timing source should always be your audio device. This is the only reliable way to remain in audio-video sync, as opposed to any sort of timer or framecounter. It will normally give you the play position in samples; calculate music beats from that by dividing by samples per beat. That way, your sync points will most likely be nice integer values (and probably multiples of 4).

The mercury demotool, for example, doesn't even have a concept of "time in seconds", all times are specified in terms of music beats.

This is of course useless if you use music with a highly non-even rhythm, in which case you'll have to figure it out yourself. :)


urs: Could you clarify more? Any example or pseudocode? :)
@littlejerome1:
The way I described it: t = 1.0f does not mean one second has passed, but one beat/pattern of your music. So if something, e.g. a scene transition, should happen after a number of beats, it happens not after 7.357 seconds but when t > n, where n is a nice integer number which compresses fine.
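
In other words, sync points end up as small integer constants, something like this (the scene functions are placeholders):

Code:
if (t > 16.0f) doScene2(t - 16.0f);   // trigger on beat 16
else           doScene1(t);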
added on the 2016-08-20 15:52:31 by TGGC TGGC
I mean a clarification, example, or pseudocode of this part:

Quote:

It will normally give you the play position in samples, calculate music beats from that (by dividing by samples-per-beat).
Code:
// Get your time (in seconds) from the music for accurate sync:
playbackTime = getTimeFromMusicPlayer()
// Get the BPM of the track from your favourite musician:
bpm = 180
// Figure out how many seconds one beat takes:
bps = bpm / 60 // beats per second (3)
spb = 1 / bps  // seconds per beat (0.333)
// Figure out how many beats you are into the track:
timeInBeats = playbackTime / spb


Now flash the screen white and add some hypnoglow every 4th beat and your job is done.

It's also helpful sometimes to have a "bars" measure (typically 4 beats in a bar, so beats / 4). Then you just screen flash every bar ;)
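
In the same pseudocode style, a per-bar flash with a quick decay could look something like this (flash and white are placeholders for your glow amount and flash colour):

Code:
timeInBars = timeInBeats / 4
// jumps to 1 at the start of every bar, then fades over the first quarter:
flash = max(0, 1 - fract(timeInBars) * 4)
colour = colour + flash * white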
added on the 2016-08-21 01:28:09 by psonice psonice
Note that you can optimise that slightly, since you know the BPM in advance:

Code: timeInBeats = playbackTime * 3
added on the 2016-08-21 01:29:39 by psonice psonice
If you're using 4klang, you've got all your necessary defines in the 4klang.h which gets created along with your song.
Here's some code:
Code:
// 1 tick is related to a number of rows in my tracker, depending on quantization when recording.
// This define specifies the amount of audio samples in one scene in the intro. I hereby define
// how long a scene is, by tweaking that 128 to a different power of two number.
#define SCENE_LENGTH (SAMPLES_PER_TICK * 128)

// I'm using the waveOut API and fetch my audio time with waveOutGetPosition into an MMTIME
// structure, then I calculate the normalized time:
float time = (float)MMTime.u.sample / SCENE_LENGTH;

In scene 0 time will be from 0.0 to 0.9999999...
In scene 1 time will be from 1.0 to 1.9999999...
In scene 2 time will be from 2.0 to 2.9999999...
etc.

If I want to know in what scene I am, I can do floor(time). If I want to know the internal scene time, normalized to a range from 0 to 1, I can do fract(time).
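
So the scene dispatch boils down to something like this (drawScene0/drawScene1 are placeholders):

Code:
int scene = (int)time;        // floor(time) for non-negative time
float t = time - scene;       // fract(time), 0..1 within the scene
if (scene == 0) drawScene0(t);
if (scene == 1) drawScene1(t);
// etc.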
added on the 2016-08-22 04:24:53 by xTr1m xTr1m
... and syncing to a 4/4 beat means coding syncs for 0, 0.25, 0.5 and 0.75... or when floor(time*4) increments. Take your pick :)

Fading from one scene to the next is also easy: just multiply min(1, sin(fract(time)*pi)*c) with your pixel color. Tweak c to set the fade duration (the larger c, the shorter the fade).
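
With c = 4, for instance, the factor reaches full brightness about 8% into the scene and starts dropping again about 8% before the end:

Code:
float fade = min(1.0f, sinf(fract(time) * 3.14159f) * 4.0f);  // c = 4
color *= fade;  // black exactly at the scene border, full brightness in between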
added on the 2016-08-22 10:55:19 by xTr1m xTr1m
