Activate voice processing (echo cancellation) on iOS (iPhone)



I want to enable the echo cancellation (Voice Processing) feature in my iOS audio pipeline.
From what I've read, I have to use the kAudioUnitSubType_VoiceProcessingIO subtype.

My VoIP app uses two AudioUnits: one unit for the mic side and another unit for the speaker side.
So in a full-duplex audio call I currently use two different audio units (I'm not sure whether that is even allowed for voice processing on iOS).

When I set this subtype on my AudioUnits, the echo cancellation seems to work, but the audio quality isn't very good. It's hard to describe, but I get some background noise in the signal.

What do I have to do to optimize this and remove the noise from my signal?
Here is my code for the setup of my audio engines. I won't post all of it, because it's a lot; instead, here are only the pieces I think are relevant. If anything is missing, please let me know.

Audio session (audio format: PCM Int16, sample rate: 16000, 1 channel):

 do {
            let session = AVAudioSession.sharedInstance()
            try session.setPreferredSampleRate(16000)
            try session.setPreferredIOBufferDuration(0.02)
            
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            
        } catch let error {
            Logger.log("Error while setting up AVAudioSession: \(error)", type: .error)
        }

First the recorder (AudioUnit) side:

var componentDesc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_VoiceProcessingIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0, componentFlagsMask: 0)

var streamFormatDesc = AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonInterleaved,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 16,
            mReserved: UInt32(0))
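As a side note, the byte-layout fields of an AudioStreamBasicDescription have to be mutually consistent, otherwise Core Audio rejects the format. Since my format sets kAudioFormatFlagIsNonInterleaved, each buffer carries a single channel, which is why mBytesPerFrame is 2 rather than 2 × channel count. A small helper (hypothetical, just to show the relationship) to check this:

```swift
// Bytes per frame for packed linear PCM. With the
// kAudioFormatFlagIsNonInterleaved flag set, each AudioBuffer holds one
// channel, so the per-buffer frame size ignores the channel count;
// interleaved formats multiply by it instead.
func bytesPerFrame(bitsPerChannel: Int, channels: Int, nonInterleaved: Bool) -> Int {
    let bytesPerSample = bitsPerChannel / 8
    return nonInterleaved ? bytesPerSample : bytesPerSample * channels
}

// My format: 16-bit, mono, non-interleaved -> 2 bytes per frame,
// and with mFramesPerPacket = 1, mBytesPerPacket is the same value.
print(bytesPerFrame(bitsPerChannel: 16, channels: 1, nonInterleaved: true))  // 2
```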

Here is the playback engine (AUAudioUnit):

do {
            let audioComponentDescription = AudioComponentDescription(
                componentType: kAudioUnitType_Output,
                componentSubType: kAudioUnitSubType_VoiceProcessingIO,
                componentManufacturer: kAudioUnitManufacturer_Apple,
                componentFlags: 0, componentFlagsMask: 0)
            
            if auAudioUnit == nil {
                auAudioUnit = try AUAudioUnit(componentDescription: audioComponentDescription)

                // setFormat(_:) expects an AVAudioFormat, not a bare sample rate.
                if let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                              sampleRate: 16000,
                                              channels: 1,
                                              interleaved: false) {
                    try auAudioUnit.inputBusses[0].setFormat(format)
                }
                
                auAudioUnit.outputProvider = { (_, _, frameCount, _, inputDataList) -> AUAudioUnitStatus in
                    self.fillSpeakerBuffer(inputDataList: inputDataList, frameCount: Int(frameCount))
                    return 0
                }
            }
            auAudioUnit.isOutputEnabled = true
            
            try auAudioUnit.allocateRenderResources()
            try auAudioUnit.startHardware()
        } catch let error {
            Logger.log("Error while setting up the playback engine: \(error)", type: .error)
        }
