How to handle large audio files with Whisper API in openai-python without hitting memory limits? #2547

Hi,
I'm working with the Whisper API in openai-python to transcribe long recordings (over 1 hour), and I'm running into memory and timeout issues when sending large files in a single request.

Replies: 2 comments

Think I need anything from the center

If you're running into memory or timeout issues with the Whisper API when transcribing long recordings (1+ hours), it's best to avoid sending the entire file in a single request. Large uploads increase both memory usage and the chance of hitting request timeouts; splitting the recording into smaller chunks and transcribing each chunk separately keeps every request small.
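
Here's a minimal sketch of one way to do that, assuming the openai>=1.0 Python client and pydub for splitting (pydub requires ffmpeg on the system). The 10-minute chunk length, the `whisper-1` model name, and the file names are just placeholders to tune against the API's upload size limit:

```python
import os
import tempfile

from openai import OpenAI
from pydub import AudioSegment

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHUNK_MS = 10 * 60 * 1000  # 10-minute chunks; keep each exported file well under the upload size limit


def transcribe_long_audio(path: str) -> str:
    """Split a long recording into chunks and transcribe each one in its own request."""
    audio = AudioSegment.from_file(path)  # decodes the file once; uploads stay small
    transcripts = []

    with tempfile.TemporaryDirectory() as tmpdir:
        for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
            chunk = audio[start:start + CHUNK_MS]  # pydub slices by milliseconds
            chunk_path = os.path.join(tmpdir, f"chunk_{i}.mp3")
            chunk.export(chunk_path, format="mp3")

            # One small request per chunk instead of one huge upload
            with open(chunk_path, "rb") as f:
                result = client.audio.transcriptions.create(
                    model="whisper-1",
                    file=f,
                )
            transcripts.append(result.text)

    return " ".join(transcripts)


if __name__ == "__main__":
    print(transcribe_long_audio("long_recording.mp3"))
```

A couple of caveats: `AudioSegment.from_file` still decodes the whole recording into memory, so if that alone is too heavy you can pre-split on disk with ffmpeg instead (e.g. `ffmpeg -i input.mp3 -f segment -segment_time 600 -c copy chunk_%03d.mp3`) and loop over the resulting files. Also, fixed time boundaries can cut a word in half; adding a small overlap between chunks or splitting on silence (pydub has silence-detection helpers) usually gives cleaner joins.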

Answer selected by Super-Mutec17