How not to develop sound engines



While programming sound for applications and games, I often had to rewrite the entire code base of a sound module: many of them either had an overcomplicated architecture or, on the contrary, could do nothing beyond simply playing back sounds.



The analogy with image rendering in games maps well onto sound engines: if your pipeline is too simplistic and buried under a pile of abstractions, you can hardly program anything more complicated than a cube with gears; if, on the other hand, your entire code base consists of direct OpenGL or D3D calls, you cannot scale that spaghetti without pain.



How relevant is the comparison with graphical rendering?



Sound rendering goes through the same stages as graphics rendering: updating resources from game logic, processing the data into a digestible form, post-processing, and outputting the final result. All of this can take quite a long time, so for illustration I will use my own audio library to measure render performance.



The test streams sounds from an SSD, decodes them from Opus, runs them through a DSP chain (including resampling from 48000 Hz to 44100 Hz), and mixes the result. Test machine: Intel Core i9 9900 @ 4.5 GHz, 32 GB RAM, 480 GB SATA SSD.



    FRESPONZE_BEGIN_TEST
    if (!pFirstListener) return false;
    // Don't start a new render while the ring buffer still holds unprocessed buffers
    if (RingBuffer.GetLeftBuffers()) return false;
    RingBuffer.SetBuffersCount(RING_BUFFERS_COUNT);
    RingBuffer.Resize(Frames * Channels);
    OutputBuffer.Resize(Frames * Channels);
    tempBuffer.Resize(Channels, Frames);
    mixBuffer.Resize(Channels, Frames);

    for (size_t i = 0; i < RING_BUFFERS_COUNT; i++) {
        tempBuffer.Clear();
        mixBuffer.Clear();
        pListNode = pFirstListener;
        while (pListNode) {
            /* Walk every emitter attached to this listener */
            EmittersNode* pEmittersNode = nullptr;
            if (!pListNode->pListener) break;
            pListNode->pListener->GetFirstEmitter(&pEmittersNode);
            while (pEmittersNode) {
                tempBuffer.Clear();
                pEmittersNode->pEmitter->Process(tempBuffer.GetBuffers(), Frames);
                // Accumulate the emitter's output into the shared mix buffer
                for (size_t o = 0; o < Channels; o++) {
                    MixerAddToBuffer(mixBuffer.GetBufferData((fr_i32)o), tempBuffer.GetBufferData((fr_i32)o), Frames);
                }

                pEmittersNode = pEmittersNode->pNext;
            }

            pListNode = pListNode->pNext;
        }

        /* Interleave the planar mix and push it into the ring buffer */
        PlanarToLinear(mixBuffer.GetBuffers(), OutputBuffer.Data(), Frames * Channels, Channels);
        RingBuffer.PushBuffer(OutputBuffer.Data(), Frames * Channels);
        RingBuffer.NextBuffer();
    }
    FRESPONZE_END_TEST("Audio render")


[00:00:59:703]: 'Audio render' operation passed: 551 microseconds
[00:00:59:797]: 'Audio render' operation passed: 512 microseconds
[00:00:59:906]: 'Audio render' operation passed: 541 microseconds
[00:01:00:000]: 'Audio render' operation passed: 583 microseconds


As the log shows, a full render pass takes roughly 500-580 microseconds. That is not free: sound rendering is real work, and its architecture deserves the same care as the graphics pipeline.
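
For reference, here is what a helper with the shape of the PlanarToLinear call above typically does; the sketch below is my assumption, not the library's actual code. It interleaves per-channel (planar) buffers into a single linear buffer:

    #include <cstddef>

    // Assumed sketch: interleave planar [channel][frame] data into a
    // single [frame * channels + channel] buffer, matching the
    // PlanarToLinear(buffers, out, totalSamples, channels) call shape.
    void PlanarToLinearSketch(float** planar, float* linear,
                              std::size_t totalSamples, std::size_t channels)
    {
        const std::size_t frames = totalSamples / channels;
        for (std::size_t f = 0; f < frames; ++f)
            for (std::size_t c = 0; c < channels; ++c)
                linear[f * channels + c] = planar[c][f];
    }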



How not to do it



Let's take a concrete example. It is a home-grown engine written in C++, comparable in scope to SoLoud, and it runs on top of OpenAL. In complexity its API sits somewhere between OpenAL and Wwise.



The central entity is ref_sound, created through ISoundManager; game code can play it, stop it, move it, and query its state. Its closest analogue is UAudioComponent in Unreal Engine.



void Class::Function()
{
    // Play the sound once at the object's current position
    snd.play_at_pos(0, Position(), false);

    // React to some game event
    if (IsHappened())
    {
        // ...
    }

    // ...
}


Under the hood there is CSoundRender_Target, a wrapper over the output API (OpenAL or DirectSound), and CSoundRender_Cache, which caches decoded Vorbis data. A target here is essentially a voice: a source + emitter fused into a single object.
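
To make the coupling concrete, here is a rough sketch of what such a fused target looks like; the type and its fields are assumed for illustration, not the engine's actual code:

    // Output handle, decoder state, and world placement all live in
    // one object, so the output API, the codec, and the 3D logic
    // cannot be replaced independently of each other.
    struct FusedTargetSketch {
        unsigned outputHandle;  // e.g. an OpenAL source id
        void*    decoderState;  // e.g. a Vorbis decoder instance
        float    position[3];   // the "emitter" half of the object
        float    volume;        // gain, also stored right here
        void Update();          // decodes, feeds the API, and recomputes 3D, all in one place
    };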







Below I will describe an architecture in which the Core part (the role FMOD and Wwise play) is cleanly separated from the platform API part (the role PortAudio plays).





Hardware, mixer





The first step is to split the engine into two independent parts: the hardware layer and the mixer. The hardware layer talks only to the platform audio API and knows nothing about game objects; the mixer produces the final buffer and knows nothing about the platform. Each part can then be replaced without touching the other.



A schematic example:



void GameScheduler::Update() {
    // ... game logic update ...

    // Game logic only asks the manager to stop the sound.
    // The manager marks the voice as stopping; the actual
    // silencing happens later, on the audio thread.
    SoundManager::StopSound(id);

    // ...
}

// ...

void AudioHardware::Update() {
    // ... device and stream housekeeping ...

    // The hardware layer asks the mixer for the next block of
    // samples and hands it to the output device; it neither knows
    // nor cares how the mix was produced.
    AudioMixer::Render(input, frames);
    memcpy(output, input, frames * channels * frame_size);

    // ...
}


This separation makes the engine modular. AudioHardware can be implemented on top of PortAudio or directly on the Windows Audio Session API; SoundManager can do its own mixing or delegate it to FMOD or Wwise. AudioHardware never changes when the mixer side does, and vice versa.
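
A minimal sketch of that boundary, with the interface names assumed for illustration:

    // The mixer side implements the render callback; the hardware side
    // owns the device. Neither needs the other's headers beyond this.
    class IAudioCallback {
    public:
        virtual ~IAudioCallback() = default;
        // Fill 'output' with 'frames' frames of interleaved float samples.
        virtual void Render(float* output, int frames, int channels) = 0;
    };

    class IAudioHardware {
    public:
        virtual ~IAudioHardware() = default;
        virtual bool Open(int sampleRate, int channels, IAudioCallback* mixer) = 0;
        virtual void Close() = 0;
    };

A PortAudio or WASAPI backend implements IAudioHardware, the mixer implements IAudioCallback, and either half can be swapped without recompiling the other.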



The second step is routing, built around the emitters-source scheme. As in a DAW, signals should travel through buses that can be processed independently of one another. This is what makes techniques like side-chain processing possible, for example ducking the music while a character is speaking. Without routing, every such feature degenerates into a one-off hack.
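
As an illustration, here is a minimal sketch of bus-level ducking driven by a side-chain peak; the Bus type, the ApplyDucking name, and the 0.6 ducking depth are all assumed:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Bus {
        std::vector<float> buffer; // one block of mono samples
        float gain = 1.0f;
    };

    // Crude side-chain: measure the voice bus peak for this block
    // and attenuate the music bus proportionally (no smoothing).
    void ApplyDucking(Bus& music, const Bus& voice)
    {
        float peak = 0.0f;
        for (float s : voice.buffer)
            peak = std::max(peak, std::fabs(s));
        const float duck = 1.0f - 0.6f * std::min(peak, 1.0f);
        for (float& s : music.buffer)
            s *= duck;
    }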







A few words about the emitters-source scheme itself. It splits playback into two entities: a source, which owns the audio data and produces samples, and an emitter, an instance of that sound in the world with its own position, volume, and effect chain. The emitters-source scheme (Wwise and FMOD expose a similar idea as virtual emitters) lets any number of emitters share a single source instead of copying the data per instance.
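
A minimal sketch of that split, with the types assumed for illustration:

    #include <cstddef>
    #include <memory>
    #include <vector>

    // One Source owns the decoded data; any number of Emitters
    // reference it, each with its own playback state and placement.
    struct Source {
        std::shared_ptr<const std::vector<float>> samples;
        int channels = 1;
    };

    struct Emitter {
        std::shared_ptr<Source> source;  // shared, never copied per instance
        std::size_t cursor = 0;          // this instance's playback position
        float position[3] = {0, 0, 0};
        float volume = 1.0f;
    };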



The emitters-source scheme does not take much code to implement; miniaudio, for example, follows a similar model.



