
I'm trying to figure out how to use the new outputProvider API on AVAssetReader. Specifically, I want to open a media file (video or audio), extract its audio, and write it back out as PCM. I have the following code:

import Foundation
import AVFoundation

enum AudioExtractorError: Error {
    case noAudio
    case bufferAllocationFailed
}

@available(macOS 26.0, iOS 26.0, *)
class AudioExtractor {
    init(fileURL: URL) {
        self.fileURL = fileURL
    }

    func extractAudio(from fileURL: URL, to outputFolder: URL) async throws -> URL {
        let asset = AVAsset(url: fileURL)
        guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first else {
            throw AudioExtractorError.noAudio
        }
        let reader = try AVAssetReader(asset: asset)
        let outputSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMBitDepthKey: 32,
            AVLinearPCMIsFloatKey: true,
            AVLinearPCMIsNonInterleaved: false,
            AVNumberOfChannelsKey: 2
        ]
        let trackOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
        let outputProvider = reader.outputProvider(for: trackOutput)
        try reader.start()

        let outputURL = outputFolder
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("caf")
        let outputFile = try AVAudioFile(forWriting: outputURL, settings: outputSettings)

        while let sampleBuffer = try await outputProvider.next() {
            guard case let CMSampleBuffer.DynamicContent.dataBuffer(blockBuffer) = sampleBuffer.content else {
                continue
            }
            let blockSampleBuffer = CMReadySampleBuffer(dataBuffer: blockBuffer,
                                                        formatDescription: sampleBuffer.formatDescription!,
                                                        sampleProperties: sampleBuffer.sampleProperties)
            guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: outputFile.processingFormat,
                                                   frameCapacity: AVAudioFrameCount(sampleBuffer.sampleCount)) else {
                throw AudioExtractorError.bufferAllocationFailed
            }
            try blockSampleBuffer.copyPCMData(fromRange: 0..<sampleBuffer.sampleCount, into: pcmBuffer.mutableAudioBufferList)
            try outputFile.write(from: pcmBuffer)
        }
        return outputURL
    }

    let fileURL: URL
}

When I run this, the call to blockSampleBuffer.copyPCMData() fails with error code -12731, which is kCMSampleBufferError_RequiredParameterMissing. Has anyone managed to get this API working?

asked Dec 10, 2025 at 18:57
  • I see that these APIs are new, but are they really new, or just Swift annotations for existing APIs such as CMSampleBufferCopyPCMDataIntoAudioBufferList()? I guess those structs and enums go a little beyond the usual syntactic sugar. TIL! Commented Dec 11, 2025 at 11:23
  • I'm pretty sure they're actually new APIs, though they certainly do seem designed to be more Swifty, at least in that they include richer information for the type system and therefore more type safety. Not that it's enough to save you from the trademark inscrutable Core Audio/Core Media error codes... Commented Dec 11, 2025 at 23:49
  • Agreed, kCMSampleBufferError_RequiredParameterMissing was completely misleading. Commented Dec 12, 2025 at 8:19

1 Answer


It's great to see a new question on here!

There are two causes of copyPCMData() returning -12731 (kCMSampleBufferError_RequiredParameterMissing) in your code:

  1. The AVAudioFile initializer you used gives you an interleaved fileFormat but a deinterleaved (non-interleaved) processingFormat, which doesn't match the AVAssetReaderTrackOutput configuration (it's not obvious; I've mentioned this before). You can see the mismatch directly with the sketch just after this list.
  2. You're not setting AVAudioPCMBuffer.frameLength, which leaves AudioBuffer.mDataByteSize at 0.
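To check the first point for yourself, here's a minimal sketch that just prints the two formats of the AVAudioFile from the question (using the outputURL and outputSettings you already have):

let probe = try AVAudioFile(forWriting: outputURL, settings: outputSettings)
// fileFormat describes the data as laid out on disk; processingFormat is the
// format that write(from:) expects incoming AVAudioPCMBuffers to be in.
// With this initializer the two disagree about interleaving, even though the
// settings dictionary asks for interleaved PCM.
print("fileFormat:       interleaved =", probe.fileFormat.isInterleaved)
print("processingFormat: interleaved =", probe.processingFormat.isInterleaved)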

With the following changes, the code works for me:

- let outputFile = try AVAudioFile(forWriting: outputURL, settings: outputSettings)
+ let outputFile = try AVAudioFile(forWriting: outputURL, settings: outputSettings, commonFormat: .pcmFormatFloat32, interleaved: true) // note the inverted senses of interleaved vs AVLinearPCMIsNonInterleaved!
...
+ pcmBuffer.frameLength = AVAudioFrameCount(sampleBuffer.sampleCount)
 try blockSampleBuffer.copyPCMData(fromRange: 0..<sampleBuffer.sampleCount,
 into: pcmBuffer.mutableAudioBufferList)
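
For completeness, here is roughly what the file setup and read loop look like with both changes applied. This is a sketch built from the code in the question (the outputProvider and CMReadySampleBuffer calls are copied from it as-is), not a tested drop-in:

let outputFile = try AVAudioFile(forWriting: outputURL,
                                 settings: outputSettings,
                                 commonFormat: .pcmFormatFloat32,
                                 interleaved: true) // interleaved: true matches AVLinearPCMIsNonInterleaved: false

while let sampleBuffer = try await outputProvider.next() {
    guard case let CMSampleBuffer.DynamicContent.dataBuffer(blockBuffer) = sampleBuffer.content else {
        continue
    }
    let blockSampleBuffer = CMReadySampleBuffer(dataBuffer: blockBuffer,
                                                formatDescription: sampleBuffer.formatDescription!,
                                                sampleProperties: sampleBuffer.sampleProperties)
    guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: outputFile.processingFormat,
                                           frameCapacity: AVAudioFrameCount(sampleBuffer.sampleCount)) else {
        throw AudioExtractorError.bufferAllocationFailed
    }
    // Without this, every AudioBuffer in the list advertises mDataByteSize == 0
    // and copyPCMData has nowhere to copy into.
    pcmBuffer.frameLength = AVAudioFrameCount(sampleBuffer.sampleCount)
    try blockSampleBuffer.copyPCMData(fromRange: 0..<sampleBuffer.sampleCount,
                                      into: pcmBuffer.mutableAudioBufferList)
    try outputFile.write(from: pcmBuffer)
}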
answered Dec 11, 2025 at 10:59

6 Comments

Impressive! I suspected the difference between the processing format and the file format, and I know Apple really doesn't like planar (non-interleaved) audio formats; that's why we need the second AVAudioFile initializer to set interleaved to true, right?
Thanks! I’d phrase it slightly differently: planar (non-interleaved) formats are more common in Apple’s capture APIs and less common in file I/O. Planarity ties closely to buffer sizing, which tends to matter more during capture and less during disk operations. With that in mind, I think the AVAudioFile initializer Andrew used was designed primarily for the workflow of writing captured audio to disk.
AVAudioFile has some funny API corner cases, some inherited from ExtAudioFile, some not. My favourite is that it lacked a close method for a decade or so: stackoverflow.com/a/52122691/22147
Many thanks for the answer!
Thanks! I figured out the frameLength problem (stupid mistake), but wouldn't have figured out the interleaving mismatch without your answer. And you mentioned it before in an answer to a question from me, no less! I wish I had a better memory...
Ha! I missed that that was you! Wish I had a better memory too.
