Add support for textures/cube maps + better audio processing #10
Comments
I tried implementing combined audio/freq channels and I did manage to create those samplers, but when I actually try to implement it in the AudioProcess class, I always get
I found a shader that explains how the audio input works. It's similar, but the split is missing. It's also not quite the same; the implementation is definitely different. In Shadertoy the audio preview seems to correspond to what the texture looks like, so if a dual-channel sampler were to be implemented, it should have the same kind of input.
I think I found the JavaScript implementation of Shadertoy's audio input:

else if( inp.mInfo.mType==="mic" )
{
    if( inp.loaded===false || inp.mForceMuted || wa === null || inp.mAnalyser == null )
    {
        // no live input available: synthesize placeholder frequency and waveform data
        let num = inp.mFreqData.length;
        for( let j=0; j<num; j++ )
        {
            let x = j / num;
            let f = (0.75 + 0.25*Math.sin( 10.0*j + 13.0*time )) * Math.exp( -3.0*x );
            if( j<3 )
                f = Math.pow( 0.50 + 0.5*Math.sin( 6.2831*time ), 4.0 ) * (1.0-j/3.0);
            inp.mFreqData[j] = Math.floor(255.0*f) | 0;
        }
        for( let j=0; j<num; j++ )
        {
            let f = 0.5 + 0.15*Math.sin( 17.0*time + 10.0*6.2831*j/num ) * Math.sin( 23.0*time + 1.9*j/num );
            inp.mWaveData[j] = Math.floor(255.0*f) | 0;
        }
    }
    else
    {
        // real input: the Web Audio analyser provides the FFT spectrum and the waveform bytes
        inp.mAnalyser.getByteFrequencyData( inp.mFreqData );
        inp.mAnalyser.getByteTimeDomainData( inp.mWaveData );
    }

    if( this.mTextureCallbackFun!==null )
        this.mTextureCallbackFun( this.mTextureCallbackObj, i, {wave:inp.mFreqData}, false, 5, 1, -1, this.mID );

    if( inp.loaded===true )
    {
        // row 0 = frequency spectrum, row 1 = waveform
        let waveLen = Math.min( inp.mWaveData.length, 512 );
        this.mRenderer.UpdateTexture(inp.globject, 0, 0, 512, 1, inp.mFreqData);
        this.mRenderer.UpdateTexture(inp.globject, 0, 1, waveLen, 1, inp.mWaveData);
    }
}
this.mRenderer.UpdateTexture(inp.globject, 0, 0, 512, 1, inp.mFreqData);
this.mRenderer.UpdateTexture(inp.globject, 0, 1, waveLen, 1, inp.mWaveData);

So basically the sound frequency and waveform are layered into a 512x2 texture; this corresponds with the explanation from the shader.
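For illustration, here is a minimal sketch of that 512x2 packing in raw WebGL1. It assumes a GL context gl and a Web Audio AnalyserNode analyser (with fftSize set to 1024, so there are 512 frequency bins) already exist; both names are placeholders, not this project's API. Row 0 holds the frequency spectrum, row 1 holds the waveform, exactly as in the Shadertoy code above.

const W = 512;                       // texture width, matching Shadertoy's layout
const freqData = new Uint8Array(W);  // row 0: frequency spectrum (FFT magnitudes)
const waveData = new Uint8Array(W);  // row 1: time-domain waveform

// one-time setup: a 512x2 texture with one byte per texel
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, W, 2, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// per frame: refresh both rows from the analyser
function updateAudioTexture() {
    analyser.getByteFrequencyData(freqData);   // spectrum  -> row 0
    analyser.getByteTimeDomainData(waveData);  // waveform  -> row 1
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, W, 1, gl.LUMINANCE, gl.UNSIGNED_BYTE, freqData);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 1, W, 1, gl.LUMINANCE, gl.UNSIGNED_BYTE, waveData);
}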
I tried implementing it, but I don't think I can; I have no idea what I'm doing when it comes to that FFT code.
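For what it's worth, the snippet above doesn't actually contain any FFT code: the sine/exp loops only synthesize fake data when no input is connected, and the real spectrum comes from the Web Audio AnalyserNode, which performs the FFT internally. A minimal sketch of how such an analyser could be created, assuming an HTML audio element with id "music" (the element and its id are hypothetical):

const audioCtx = new AudioContext();
const audioElem = document.getElementById("music");           // hypothetical <audio> element
const source = audioCtx.createMediaElementSource(audioElem);  // feed the element into Web Audio

const analyser = audioCtx.createAnalyser();
analyser.fftSize = 1024;                 // 1024-sample FFT -> frequencyBinCount of 512 bins
source.connect(analyser);
analyser.connect(audioCtx.destination);  // keep the track audible

// buffers that the per-frame getByte* calls in the code above would fill
const freqData = new Uint8Array(analyser.frequencyBinCount);  // 512 spectrum bytes
const waveData = new Uint8Array(analyser.fftSize);            // waveform samples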
I am unable to provide support for this application at the moment. Good luck.
It would be nice and very useful to implement support for images to be used as input for samplers. The way I picture it is that you put an image inside the shader's folder and it is automatically loaded as a uniform, named after its filename (a rough sketch of the idea follows below).

Also, since this requires sampler2D, it would be nice if there were iSound and iFreq with both channels combined as a sampler2D; it would make porting sound-enabled Shadertoy shaders much easier, as I wouldn't have to combine these channels in the shader.

I am not sure how exactly Shadertoy implements iChannel0 audio, but it seems to have something to do with those peaks.
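In WebGL terms, the folder-image idea could look roughly like the sketch below. Everything here is hypothetical and not this project's API: gl and program are assumed to exist, "flower.png" stands in for whatever file is found in the shader's folder, and the uniform name is derived from the filename.

function loadImageUniform(gl, program, url) {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // 1x1 placeholder until the image arrives
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
                  new Uint8Array([0, 0, 0, 255]));
    // NPOT-safe parameters for WebGL1
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    const img = new Image();
    img.onload = () => {
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    };
    img.src = url;

    // uniform name derived from the filename: "flower.png" -> "flower"
    const name = url.split("/").pop().split(".")[0];
    gl.useProgram(program);
    const loc = gl.getUniformLocation(program, name);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.uniform1i(loc, 0);   // the sampler2D uniform reads from texture unit 0
    return tex;
}

loadImageUniform(gl, program, "shaderfolder/flower.png");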