
Add support for textures/cube maps + better audio processing #10

Open

SolsticeSpectrum opened this issue Oct 24, 2023 · 8 comments

SolsticeSpectrum commented Oct 24, 2023

It would be nice and very useful to support images as inputs for samplers. The way I picture it: you put an image in the shader's folder, and it is automatically loaded as a uniform named after its filename.

Also, since this requires sampler2D, it would be nice to have iSound and iFreq uniforms with both channels combined into a single sampler2D. That would make porting sound-enabled Shadertoy shaders much easier, since I wouldn't have to combine the channels in the shader.
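A minimal sketch of what that combining could look like on the CPU side, assuming the app already has separate left/right byte buffers; `packStereoRows` and the `iFreq` uniform name here are hypothetical, not anything the app currently provides:

```javascript
// Hypothetical sketch: pack left/right FFT magnitudes into one buffer
// that could back a WIDTH x 2 single-channel texture (row 0 = left,
// row 1 = right), so a single sampler2D "iFreq" serves both channels.
function packStereoRows(left, right) {
  const width = Math.min(left.length, right.length);
  const packed = new Uint8Array(width * 2);
  packed.set(left.subarray(0, width), 0);      // row 0: left channel
  packed.set(right.subarray(0, width), width); // row 1: right channel
  return packed;
}

// A shader would then read, e.g.:
//   float l = texture(iFreq, vec2(x, 0.25)).x; // left  (row 0)
//   float r = texture(iFreq, vec2(x, 0.75)).x; // right (row 1)
```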

I'm not sure exactly how Shadertoy implements the iChannel0 audio, but it seems to have something to do with those peaks:
image

image

SolsticeSpectrum (Author) commented:

I tried implementing combined audio/freq channels and managed to create the samplers, but when I actually implement it in the AudioProcess class I always get `malloc(): unaligned tcache chunk detected`, and I'm not skilled enough to fix it.


SolsticeSpectrum commented Oct 25, 2023

Also, the sound is significantly different. Here is what it looks like in the program:
image
I had to do some magic to combine the left and right frequencies. iSoundL and iSoundR make it jump all over the place, so they're not very suitable.

```glsl
// Combine two single-channel 1D textures into one vec4 sample.
vec4 combine(sampler1D texture1, sampler1D texture2, vec2 uv) {
  float value1 = texture(texture1, uv.x).x;
  float value2 = texture(texture2, uv.x).x;
  return vec4(value1, value2, 0.0, 0.0);
}

float fft = combine(iFreqL, iFreqR, vec2((n + float(i)) * st, 0.25)).x * 10.0;
if (fft > 0.7) { // this limits it so it doesn't look that bad
  fft = 0.7;     // or apply a specific value or calculation
}
```

Here is iSound for comparison:
image

And here is what it should really look like:
image


SolsticeSpectrum commented Oct 25, 2023

I found a shader that explains how the audio input works:
https://www.shadertoy.com/view/tljczR
image

It's similar, but the split is missing:
image

Also, it's not quite the same; the implementation is definitely different. In Shadertoy the audio preview seems to correspond to what the texture looks like, so if a dual-channel sampler were implemented, it should receive the same kind of input.

SolsticeSpectrum (Author) commented:

Here are the two compared with the same audio input:
image

@SolsticeSpectrum SolsticeSpectrum changed the title Add support for textures/cube maps Add support for textures/cube maps + better audio processing Oct 25, 2023
SolsticeSpectrum (Author) commented:

I think I found the JavaScript implementation of Shadertoy's audio input:

```js
else if( inp.mInfo.mType==="mic" )
{
    if( inp.loaded===false || inp.mForceMuted || wa === null || inp.mAnalyser == null )
    {
        let num = inp.mFreqData.length;
        for( let j=0; j<num; j++ )
        {
            let x = j / num;
            let f = (0.75 + 0.25*Math.sin( 10.0*j + 13.0*time )) * Math.exp( -3.0*x );

            if( j<3 )
                f = Math.pow( 0.50 + 0.5*Math.sin( 6.2831*time ), 4.0 ) * (1.0-j/3.0);

            inp.mFreqData[j] = Math.floor(255.0*f) | 0;
        }

        for( let j=0; j<num; j++ )
        {
            let f = 0.5 + 0.15*Math.sin( 17.0*time + 10.0*6.2831*j/num ) * Math.sin( 23.0*time + 1.9*j/num );
            inp.mWaveData[j] = Math.floor(255.0*f) | 0;
        }
    }
    else
    {
        inp.mAnalyser.getByteFrequencyData(  inp.mFreqData );
        inp.mAnalyser.getByteTimeDomainData( inp.mWaveData );
    }

    if( this.mTextureCallbackFun!==null )
        this.mTextureCallbackFun( this.mTextureCallbackObj, i, {wave:inp.mFreqData}, false, 5, 1, -1, this.mID );

    if( inp.loaded===true )
    {
        let waveLen = Math.min( inp.mWaveData.length, 512 );
        this.mRenderer.UpdateTexture(inp.globject, 0, 0, 512,     1, inp.mFreqData);
        this.mRenderer.UpdateTexture(inp.globject, 0, 1, waveLen, 1, inp.mWaveData);
    }
}
```
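For what it's worth, the fallback branch above can be isolated into a standalone function. This is a direct transcription of the fake-spectrum loop from the snippet (only the function wrapper is mine); it is what runs when no analyser is attached:

```javascript
// Isolated from the snippet above: when no analyser is available,
// Shadertoy fills mFreqData with a synthetic, decaying spectrum so
// shaders still receive plausible-looking input.
function fakeFreqData(num, time) {
  const out = new Uint8Array(num);
  for (let j = 0; j < num; j++) {
    const x = j / num;
    let f = (0.75 + 0.25 * Math.sin(10.0 * j + 13.0 * time)) * Math.exp(-3.0 * x);
    if (j < 3)
      f = Math.pow(0.50 + 0.5 * Math.sin(6.2831 * time), 4.0) * (1.0 - j / 3.0);
    out[j] = Math.floor(255.0 * f) | 0;
  }
  return out;
}
```

If the program's output never matches Shadertoy's even with music playing, comparing against this synthetic pattern can reveal whether the real analyser path is actually being hit.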

SolsticeSpectrum (Author) commented:

```js
this.mRenderer.UpdateTexture(inp.globject, 0, 0, 512,     1, inp.mFreqData);
this.mRenderer.UpdateTexture(inp.globject, 0, 1, waveLen, 1, inp.mWaveData);
```

So basically the frequency spectrum and the waveform are layered into a 512×2 texture; this matches the explanation from the shader:

> The sound is input as a 512×2 texture. The bottom layer is the waveform, where the brightness corresponds to the amplitude of the sample; the top layer is a frequency spectrum of the underlying sine-wave frequencies, where brightness is the amplitude of the wave and each line represents an average of 23 Hz of frequencies. The texture is single-channel red, so the texture, when drawn, is red.
>
> For music and audio visualisers the first channel is mainly used, as its reactions to music are more noticeable and understandable. The bottom layer can be converted into a wave.
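Taking the two UpdateTexture calls at face value, filling that texture on the CPU side could be sketched as below. `buildAudioTexture` is a hypothetical name, and the row-to-v mapping is an assumption based on how Shadertoy shaders conventionally sample the spectrum at v ≈ 0.25 and the waveform at v ≈ 0.75:

```javascript
// Sketch: emulate the two UpdateTexture calls by packing both rows of
// the 512x2 single-channel audio texture into one flat byte buffer.
// Row 0 (sampled near v = 0.25) is the frequency spectrum; row 1
// (sampled near v = 0.75) is the waveform.
const WIDTH = 512;

function buildAudioTexture(freqData, waveData) {
  const tex = new Uint8Array(WIDTH * 2);
  tex.set(freqData.subarray(0, WIDTH), 0);                                // row 0: spectrum
  tex.set(waveData.subarray(0, Math.min(waveData.length, WIDTH)), WIDTH); // row 1: waveform
  return tex;
}
```

The whole buffer could then go up in a single glTexImage2D / glTexSubImage2D call instead of two row updates, which may sidestep whatever the AudioProcess allocation bug is.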

SolsticeSpectrum (Author) commented:

I tried implementing it, but I don't think I can; I have no idea what I'm doing when it comes to that FFT code.

bradleybauer (Owner) commented:

I am unable to provide support for this application at the moment. Good luck.
