Required number of additionally allocated bytes at the end of the input bitstream for decoding. This is mainly needed because some optimized bitstream readers read 32 or 64 bits at a time and could read past the end of the buffer.
Note: If the first 23 bits of the additional bytes are not 0, then damaged MPEG bitstreams could cause overreads and segfaults.
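As a minimal sketch of how this constant is typically used, the helper below allocates an input buffer with the required padding and zeroes it; the name and the source of the actual bitstream bytes are left to the caller and are not part of the API.

```c
#include <stdint.h>
#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Hypothetical helper: allocate a padded input buffer for raw bitstream data. */
static uint8_t *alloc_padded_input(size_t data_size)
{
    uint8_t *buf = av_malloc(data_size + AV_INPUT_BUFFER_PADDING_SIZE);
    if (!buf)
        return NULL;
    /* Zero the padding so optimized readers that fetch 32/64 bits at a time
     * never see stale data past the end of the real bitstream. */
    memset(buf + data_size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
    return buf;
}
```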
Enumerator | Description |
---|---|
AVDISCARD_NONE | Discard nothing. |
AVDISCARD_DEFAULT | Discard useless packets, e.g. zero-sized packets in AVI. |
AVDISCARD_NONREF | Discard all non-reference frames. |
AVDISCARD_BIDIR | Discard all bidirectional frames. |
AVDISCARD_NONINTRA | Discard all frames that are not intra-coded. |
AVDISCARD_NONKEY | Discard all frames except keyframes. |
AVDISCARD_ALL | Discard all frames. |
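A common use of this enum is decoder-side frame dropping via AVCodecContext.skip_frame. The sketch below opens a decoder that keeps only keyframes; the helper name is illustrative and `codec` is assumed to have been found by the caller (e.g. with avcodec_find_decoder()).

```c
#include <libavcodec/avcodec.h>

/* Hypothetical helper: open a decoder that drops everything except keyframes,
 * e.g. for fast thumbnail extraction. */
static AVCodecContext *open_keyframe_only_decoder(const AVCodec *codec)
{
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    if (!avctx)
        return NULL;
    avctx->skip_frame = AVDISCARD_NONKEY;   /* decoder-side frame dropping */
    if (avcodec_open2(avctx, codec, NULL) < 0) {
        avcodec_free_context(&avctx);
        return NULL;
    }
    return avctx;
}
```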
The default callback for AVCodecContext.get_buffer2().
It is made public so it can be called by custom get_buffer2() implementations for decoders without AV_CODEC_CAP_DR1 set.
Definition at line 1695 of file decode.c.
Referenced by alloc_frame_buffer(), ff_decode_preinit(), get_buffer(), init_context_defaults(), and submit_packet().
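A minimal sketch of the pattern this enables: a custom get_buffer2() that adds some bookkeeping of its own and otherwise delegates allocation to the default implementation. The function name and log message are illustrative, not part of the API.

```c
#include <libavcodec/avcodec.h>
#include <libavutil/log.h>

/* Hypothetical custom get_buffer2(): log the request, then fall back to the
 * default allocator. */
static int my_get_buffer2(AVCodecContext *s, AVFrame *frame, int flags)
{
    av_log(s, AV_LOG_DEBUG, "allocating a %dx%d buffer\n", frame->width, frame->height);
    return avcodec_default_get_buffer2(s, frame, flags);   /* default allocation */
}

/* ... before avcodec_open2():
 *     avctx->get_buffer2 = my_get_buffer2;
 */
```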
The default callback for AVCodecContext.get_encode_buffer().
It is made public so it can be called by custom get_encode_buffer() implementations for encoders without AV_CODEC_CAP_DR1 set.
Definition at line 59 of file encode.c.
Referenced by init_context_defaults().
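The analogous sketch for the encode side, assuming a custom get_encode_buffer() that only wraps the default packet allocation; the name is illustrative.

```c
/* Hypothetical custom get_encode_buffer(): delegate packet allocation to the
 * default callback after any custom bookkeeping. */
static int my_get_encode_buffer(AVCodecContext *s, AVPacket *pkt, int flags)
{
    /* Custom bookkeeping or alignment checks could go here. */
    return avcodec_default_get_encode_buffer(s, pkt, flags);
}
```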
Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you also ensure that all line sizes are a multiple of the respective linesize_align[i].
May only be used if a codec with AV_CODEC_CAP_DR1 has been opened.
Definition at line 134 of file utils.c.
Referenced by avcodec_align_dimensions(), and update_frame_pool().
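A short sketch of the intended call pattern, assuming `avctx` is an opened decoder with AV_CODEC_CAP_DR1 and the caller implements its own frame allocator; the helper name is illustrative.

```c
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Hypothetical helper: compute padded dimensions for a custom frame allocator. */
static void query_padded_size(AVCodecContext *avctx, int *out_w, int *out_h)
{
    int w = avctx->width, h = avctx->height;
    int linesize_align[AV_NUM_DATA_POINTERS];

    avcodec_align_dimensions2(avctx, &w, &h, linesize_align);
    /* Allocate with the padded w/h and keep each plane's linesize a multiple
     * of the matching linesize_align[i]. */
    *out_w = w;
    *out_h = h;
}
```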
Converts AVChromaLocation to swscale x/y chroma position.
The positions represent the chroma (0,0) position in a coordinate system with luma (0,0) representing the origin and luma (1,1) representing 256,256.
Definition at line 350 of file utils.c.
Referenced by avcodec_chroma_pos_to_enum(), and mkv_write_video_color().
Converts swscale x/y chroma position to AVChromaLocation.
The positions represent the chroma (0,0) position in a coordinate system with luma (0,0) representing the origin and luma (1,1) representing 256,256.
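A small sketch of a round trip between the two representations, assuming the fixed-point mapping described above (0..256 along each axis).

```c
#include <libavcodec/avcodec.h>
#include <libavutil/pixfmt.h>

/* Hypothetical round trip: AVChromaLocation -> swscale position -> back. */
static void chroma_pos_roundtrip(void)
{
    int xpos = 0, ypos = 0;
    if (avcodec_enum_to_chroma_pos(&xpos, &ypos, AVCHROMA_LOC_LEFT) == 0) {
        /* AVCHROMA_LOC_LEFT maps to (0, 128): horizontally co-sited with luma,
         * vertically centered between luma samples. */
        enum AVChromaLocation loc = avcodec_chroma_pos_to_enum(xpos, ypos);
        (void)loc;   /* loc == AVCHROMA_LOC_LEFT */
    }
}
```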
Decode the audio frame of size avpkt->size from avpkt->data into frame.
Some decoders may support multiple frames in a single AVPacket. Such decoders would then just decode the first frame, and the return value would be less than the packet size. In this case, avcodec_decode_audio4 has to be called again with an AVPacket containing the remaining data in order to decode the second frame, and so on. Even if no frames are returned, the packet needs to be fed to the decoder with the remaining data until it is completely consumed or an error occurs.
Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input and output. This means that for some packets they will not immediately produce decoded output and need to be flushed at the end of decoding to get all the decoded data. Flushing is done by calling this function with packets with avpkt->data set to NULL and avpkt->size set to 0 until it stops returning samples. It is safe to flush even decoders that are not marked with AV_CODEC_CAP_DELAY; in that case, no samples will be returned.
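A sketch of the consume-until-empty loop described above, using this legacy avcodec_decode_audio4() API (since superseded by the send/receive API). `avctx` is assumed to be an opened audio decoder and `pkt` a demuxed packet; error handling is abbreviated.

```c
/* Hypothetical helper: decode every frame contained in one packet. */
static int decode_audio_packet(AVCodecContext *avctx, AVFrame *frame, const AVPacket *pkt)
{
    AVPacket tmp = *pkt;                 /* shallow copy so it can be advanced */
    while (tmp.size > 0) {
        int got_frame = 0;
        int used = avcodec_decode_audio4(avctx, frame, &got_frame, &tmp);
        if (used < 0)
            return used;                 /* decode error */
        if (got_frame) {
            /* ... consume frame->nb_samples decoded samples ... */
        }
        tmp.data += used;                /* feed back the remaining data */
        tmp.size -= used;
    }
    return 0;
}
```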
Decode the video frame of size avpkt->size from avpkt->data into picture.
Some decoders may support multiple frames in a single AVPacket; such decoders would then just decode the first frame.
Decode a subtitle message.
Return a negative value on error, otherwise return the number of bytes used. If no subtitle could be decompressed, got_sub_ptr is zero. Otherwise, the subtitle is stored in *sub. Note that AV_CODEC_CAP_DR1 is not available for subtitle codecs. This is for simplicity, because the performance difference is expected to be negligible and reusing a get_buffer written for video codecs would probably perform badly due to a potentially very different allocation pattern.
Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input and output. This means that for some packets they will not immediately produce decoded output and need to be flushed at the end of decoding to get all the decoded data. Flushing is done by calling this function with packets with avpkt->data set to NULL and avpkt->size set to 0 until it stops returning subtitles. It is safe to flush even decoders that are not marked with AV_CODEC_CAP_DELAY; in that case, no subtitles will be returned.
Definition at line 1034 of file decode.c.
Referenced by decoder_decode_frame(), process_frame(), subtitle_handler(), transcode_subtitles(), try_decode_frame(), and wrap().
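A minimal sketch of decoding a single subtitle packet, assuming `avctx` is an opened subtitle decoder and `pkt` a packet read from the demuxer; the helper name is illustrative.

```c
/* Hypothetical helper: decode one subtitle packet and release the result. */
static int handle_subtitle_packet(AVCodecContext *avctx, AVPacket *pkt)
{
    AVSubtitle sub;
    int got_sub = 0;
    int ret = avcodec_decode_subtitle2(avctx, &sub, &got_sub, pkt);
    if (ret < 0)
        return ret;                       /* negative value on error */
    if (got_sub) {
        /* ... render/convert sub.rects[0 .. sub.num_rects-1] ... */
        avsubtitle_free(&sub);            /* release the decoded subtitle */
    }
    return ret;                           /* number of bytes used */
}
```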
Supply raw packet data as input to a decoder.
Internally, this call will copy relevant AVCodecContext fields that can influence decoding per-packet and apply them when the packet is actually decoded (for example AVCodecContext.skip_frame, which might direct the decoder to drop the frame contained in the packet sent with this function).
Definition at line 589 of file decode.c.
Referenced by compat_decode(), compute_crc_of_packets(), cri_decode_frame(), dec_enc(), decode(), decode_audio_frame(), decode_packet(), decode_write(), decoder_decode_frame(), dng_decode_jpeg(), ff_load_image(), imm5_decode_frame(), LLVMFuzzerTestOneInput(), main(), movie_decode_packet(), process_frame(), run_test(), tdsc_decode_jpeg_tile(), try_decode_frame(), video_decode(), video_decode_example(), and wrap().
Return decoded output data from a decoder.
Definition at line 652 of file decode.c.
Referenced by audio_video_handler(), compat_decode(), compute_crc_of_packets(), cri_decode_frame(), dec_enc(), decode(), decode_audio_frame(), decode_packet(), decode_read(), decode_write(), decoder_decode_frame(), dng_decode_jpeg(), ff_load_image(), imm5_decode_frame(), main(), movie_push_frame(), process_frame(), run_test(), tdsc_decode_jpeg_tile(), try_decode_frame(), video_decode(), video_decode_example(), and wrap().
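A sketch of the canonical send/receive decode loop that this pair of calls implements, assuming `dec_ctx` is an opened decoder, `pkt` a demuxed packet (or NULL to start flushing), and `frame` an allocated AVFrame; the helper name is illustrative.

```c
/* Hypothetical helper: feed one packet and drain all frames it produces. */
static int decode_one_packet(AVCodecContext *dec_ctx, const AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_send_packet(dec_ctx, pkt);
    if (ret < 0)
        return ret;                              /* e.g. AVERROR(EAGAIN): drain output first */

    while (ret >= 0) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;                            /* need more input, or fully drained */
        if (ret < 0)
            return ret;                          /* real decoding error */
        /* ... use the decoded frame, then release our reference ... */
        av_frame_unref(frame);
    }
    return ret;
}
```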
Supply a raw video or audio frame to the encoder.
Use avcodec_receive_packet() to retrieve buffered output packets.
For audio: If AV_CODEC_CAP_VARIABLE_FRAME_SIZE is set, then each frame can have any number of samples. If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last. The final frame may be smaller than avctx->frame_size.
Definition at line 364 of file encode.c.
Referenced by compat_encode(), do_audio_out(), do_video_out(), encode(), encode_audio_frame(), encode_frame(), encode_write(), encode_write_frame(), flush_encoders(), run_test(), wrap(), and write_frame().
Read encoded data from the encoder.
Definition at line 395 of file encode.c.
Referenced by compat_encode(), do_audio_out(), do_video_out(), encode(), encode_audio_frame(), encode_frame(), encode_write(), encode_write_frame(), flush_encoders(), run_test(), wrap(), and write_frame().
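The matching sketch for the encode side: submit one frame (or NULL to flush) and drain every packet the encoder has ready. `enc_ctx` is assumed to be an opened encoder and `pkt` an allocated AVPacket; the helper name is illustrative.

```c
/* Hypothetical helper: feed one frame and drain all resulting packets. */
static int encode_one_frame(AVCodecContext *enc_ctx, const AVFrame *frame, AVPacket *pkt)
{
    int ret = avcodec_send_frame(enc_ctx, frame);
    if (ret < 0)
        return ret;

    while (ret >= 0) {
        ret = avcodec_receive_packet(enc_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;                            /* need more input, or fully flushed */
        if (ret < 0)
            return ret;                          /* real encoding error */
        /* ... write pkt (e.g. av_interleaved_write_frame()), then ... */
        av_packet_unref(pkt);
    }
    return ret;
}
```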
Create and return an AVHWFramesContext with values adequate for hardware decoding.
This is meant to get called from the get_format callback, and is a helper for preparing an AVHWFramesContext for AVCodecContext.hw_frames_ctx. This API is for decoding with certain hardware acceleration modes/APIs only.
The returned AVHWFramesContext is not initialized. The caller must do this with av_hwframe_ctx_init().
Calling this function is not a requirement, but makes it simpler to avoid codec or hardware API specific details when manually allocating frames.
Alternatively, an API user can set AVCodecContext.hw_device_ctx, which sets up AVCodecContext.hw_frames_ctx fully automatically and makes it unnecessary to call this function or to care about AVHWFramesContext initialization at all.
There are a number of requirements for calling this function:
The function will set at least the following fields on AVHWFramesContext (potentially more, depending on hwaccel API):
Essentially, out_frames_ref returns the same as av_hwframe_ctx_alloc(), but with basic frame parameters set.
The function is stateless, and does not change the AVCodecContext or the device_ref AVHWDeviceContext.
Definition at line 1228 of file decode.c.
Referenced by ff_decode_get_hw_frames_ctx(), and nvdec_init_hwframes().
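A sketch of the intended usage from a get_format callback: derive frame parameters for the chosen hardware pixel format, initialize the frames context, and attach it to the decoder. VAAPI is picked arbitrarily for the sketch, and the device reference is assumed to have been created by the caller (e.g. via av_hwdevice_ctx_create()) and stashed in avctx->opaque; both choices are illustrative.

```c
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Hypothetical get_format callback using avcodec_get_hw_frames_parameters(). */
static enum AVPixelFormat pick_hw_format(AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmts)
{
    AVBufferRef *device_ref = avctx->opaque;     /* assumption: stashed by the caller */

    for (; *fmts != AV_PIX_FMT_NONE; fmts++) {
        if (*fmts != AV_PIX_FMT_VAAPI)
            continue;

        AVBufferRef *frames_ref = NULL;
        if (avcodec_get_hw_frames_parameters(avctx, device_ref, *fmts, &frames_ref) < 0)
            break;
        /* The returned context is not initialized yet; adjust fields such as
         * initial_pool_size here if needed, then initialize it. */
        if (av_hwframe_ctx_init(frames_ref) < 0) {
            av_buffer_unref(&frames_ref);
            break;
        }
        avctx->hw_frames_ctx = frames_ref;
        return *fmts;
    }
    return AV_PIX_FMT_NONE;                      /* no acceptable format */
}
```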